Headshot of Michael Barkasi.

Welcome!

I’m a staff scientist in Oviedo Lab, an electrophysiology lab in the Department of Neuroscience at Washington University in St. Louis, where I model auditory-cortex responses in mice to squeaks. Before that, I was a lecturer in the Department of Philosophy and the Philosophy-Neuroscience-Psychology (PNP) program at WUSTL, and a postdoctoral visitor with Harris Lab at York University, studying the integration of auditory, proprioceptive, and visual feedback in the control of movement. I’m also an associate member of the Centre for Philosophy of Memory, where I research the role of memory in perception and the phenomenology of episodic memory. I previously worked as a postdoctoral research fellow at the Network for Sensory Research in the Department of Philosophy, University of Toronto, and I completed my Ph.D. in philosophy at Rice University (Houston, Texas).


Movement Sonification

Before my transition to neuroscience, I ran behavioral psychology experiments on how auditory feedback can augment, replace, and enrich natural proprioception, improving motor learning and motor control in fast, skilled movements. I did this work first through my company, Performance Sonification (dissolved in 2022), and then in collaboration with Harris Lab at York University.

Part of this work involved developing ultra-fast wearable embedded sensor systems for movement sonification. I worked with the Cortex-M4 and ESP32, developed bare-metal digital sound-synthesis techniques (two-timer pulse-width modulation, based on a class-D amplifier design), used inertial sensors, designed bespoke Feather-format PCBs (KiCad), and 3D-printed enclosures (FreeCAD).
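The two-timer idea can be sketched roughly as follows: one fast timer generates the PWM carrier, while a second, audio-rate timer updates the carrier’s duty cycle from a wavetable; low-pass filtering the PWM output then recovers the audio, as in a class-D amplifier. Below is a minimal Python simulation of that scheme, not the actual firmware; every clock and timer value is an assumption made for the example.

```python
import math

# Illustrative parameters (assumptions, not the real firmware values):
CPU_HZ = 240_000_000           # e.g., an ESP32-class core clock
PWM_HZ = 78_125                # fast timer: PWM carrier frequency
SAMPLE_HZ = 15_625             # slow timer: audio-rate duty updates
DUTY_STEPS = CPU_HZ // PWM_HZ  # duty resolution available at this carrier rate

def sine_table(n=256, steps=DUTY_STEPS):
    """Precomputed wavetable: one sine period mapped to PWM duty counts."""
    return [round((math.sin(2 * math.pi * i / n) * 0.5 + 0.5) * (steps - 1))
            for i in range(n)]

def duty_stream(freq_hz, n_samples, table=None):
    """Duty values the audio-rate timer ISR would load, sample by sample."""
    table = table or sine_table()
    phase_inc = freq_hz * len(table) / SAMPLE_HZ  # table steps per sample
    phase = 0.0
    out = []
    for _ in range(n_samples):
        out.append(table[int(phase) % len(table)])
        phase += phase_inc
    return out

duties = duty_stream(440, 64)  # duty updates for a 440 Hz tone
```

On real hardware the fast timer would reload each of these duty values into the PWM compare register, and the speaker’s low-pass response would smooth the carrier into an audible sine.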

Related Publications

Diagram showing the stages of a track-cycling standing start and how movement measurements might be converted into a sound played to the rider.

Examples of Michael's computational modelling. Top: An algorithm to slice paths through 3D Cartesian space into two planes. Bottom: GLMs of fMRI data.

Computational Modelling

Motion Processing and Spatial Analysis

As part of this sonification work, I wrote motion-processing algorithms both for real-time processing on embedded hardware (C++) and for post-processing (R). This work involved motion detection and segmentation, coordinate transformations, input integration, path comparison (error estimation), and time-warping (both post-hoc dynamic time warping and real-time online warping estimates).
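To give a flavor of the post-processing side, here is a minimal from-scratch dynamic time warping sketch in Python (a toy, not the production C++ or R code; the movement traces are made up for illustration):

```python
import math

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping between two 1-D paths.

    Returns the cumulative alignment cost; smaller means the two movements
    match better once differences in timing are warped away.
    """
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

# A slowed-down copy of a movement trace should align almost perfectly:
trace = [0.0, 0.5, 1.0, 0.5, 0.0]
slow  = [0.0, 0.0, 0.5, 0.5, 1.0, 1.0, 0.5, 0.5, 0.0, 0.0]
score = dtw_distance(trace, slow)
```

The same cumulative-cost recurrence, maintained incrementally over incoming samples, is the basis of online warping estimates.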

Linear Models

Neuromatch Academy’s course in computational neuroscience got me started with GLMs for modelling task-dependent fMRI responses and with linear decoders of fMRI data. I’ve written Python code both for analyzing real public fMRI data (e.g., from the Human Connectome Project) and for generating realistic simulated fMRI data.

  • GLM modelling of task-dependent somatomotor cortex responses from simulated fMRI data (Python).
  • Linear decoding (logistic regression) of motor tasks from simulated somatomotor cortex fMRI data (Python).
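Neither notebook is reproduced here, but the core GLM-plus-decoder recipe can be sketched on toy simulated data; every number below (block timing, noise level, learning rate) is made up for illustration and is much simpler than realistic simulated fMRI:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate a toy voxel time series (illustrative parameters only) ---
n_t = 200                        # time points
task = np.zeros(n_t)
task[20:40] = task[100:120] = 1  # boxcar regressor: two task blocks
true_beta = 2.0
y = true_beta * task + 0.5 * rng.standard_normal(n_t)  # response + noise

# --- GLM: design matrix [task regressor, intercept], least-squares fit ---
X = np.column_stack([task, np.ones(n_t)])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# --- Linear decoder: logistic regression predicting task state from voxels ---
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

V = np.column_stack([y, 0.3 * rng.standard_normal(n_t)])  # voxel 2 is noise
w = np.zeros(2)
b = 0.0
for _ in range(500):                   # plain batch gradient descent
    p = sigmoid(V @ w + b)
    w -= 2.0 * (V.T @ (p - task)) / n_t
    b -= 2.0 * np.mean(p - task)

accuracy = np.mean((sigmoid(V @ w + b) > 0.5) == task)
```

A real pipeline would convolve the boxcar with a hemodynamic response function and cross-validate the decoder, but the encoding/decoding split is the same.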

Phenomenal Consciousness

In addition to my empirical work, I do interdisciplinary research on how we subjectively experience the world. I’m particularly interested in how memory and sensory perception interact to afford consciousness of the past and present. Most of my work focuses on experiencing what’s not there (memories, dreams, hallucinations, VR), the feelings of presence and pastness, and the neural correlates of consciousness.

You might check out this piece and this piece I wrote on presence and digital fluency, or this paper, on how perception involves experience of the past. The paper was one of two runners-up for the essay prize at the Centre for Philosophy of Memory. I summarize the idea in a blog post.

Related Publications

Self-portrait of Ernst Mach demonstrating the first-person perspective.

Top image: photo of Michael lecturing. Bottom image: schematic of a deep neural network made for sentiment analysis.

Teaching and AI Demos

I’ve taught philosophy and cognitive science at a wide range of schools, making philosophy relevant to STEM students and STEM relevant to philosophy students. In addition to traditional lectures and Socratic discussion, I’m writing a series of accessible coding demos in Colab covering core models in cognitive science, such as deep neural networks and “physical symbol systems” (in the style of Newell and Simon). My teaching philosophy.

Coding Demos

  • Sample physical symbol system that uses heuristics to solve a version of the river-crossing problem.
  • Single-layer neural network (McCulloch–Pitts neuron) learning with the perceptron convergence rule.
  • A deep neural network that learns to do sentiment analysis on IMDb reviews, with some explanation of why the hidden layer is hard to interpret.
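The core of the perceptron demo fits in a few lines. This is a from-scratch sketch rather than the Colab notebook itself, using logical AND as a stand-in linearly separable problem:

```python
# A single McCulloch–Pitts unit trained with the perceptron convergence rule.
def step(x):
    """Threshold activation: fire (1) if the weighted input is non-negative."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=1.0):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out       # perceptron rule: w += lr * err * x
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND is linearly separable, so convergence is guaranteed:
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
```

Swapping AND for XOR shows why a single layer fails and motivates the deep-network demo that follows it.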

Courses Taught

A full list of courses I’ve taught is available on my CV. Here are the most recent, with syllabi. I never taught it, but for those interested, here is a sample syllabus for philosophy of neuroscience.

  • Philosophy of Mind, Washington University in St. Louis (PNP/Phil 315, winter 2024) | syllabus
  • Introduction to Cognitive Science, Washington University in St. Louis (PNP 200, winter 2024) | syllabus
  • Reading Course (Senior Capstone) in Deep Learning, Washington University in St. Louis (PNP 390, fall 2023) | reading list
  • Philosophy of Psychology, University of British Columbia, Okanagan (Phil 446, winter 2022) | syllabus
  • Philosophy of Artificial Intelligence, York University (Cogs/Phil 3750, winter 2021) | syllabus

Coding and Modelling (Summary List)

Interested in chatting about human perception or movement sonification?