Michael Barkasi

About me

I’m a staff scientist for Oviedo Lab, an electrophysiology lab in the department of neuroscience at Washington University in St. Louis (WashU). I’m also an associate member of the Centre for Philosophy of Memory, researching the role of memory in perception and the phenomenology of episodic memory. Previously I was a lecturer in the department of philosophy and the philosophy-neuroscience-psychology program at WashU, a postdoctoral visitor with Harris Lab at York University studying the integration of auditory and proprioceptive feedback in the control of movement, and a postdoctoral research fellow at the Network for Sensory Research in the Department of Philosophy, University of Toronto. I completed my Ph.D. in philosophy at Rice University (Houston, Texas), where I was also a coauthor of the program’s textbook in mathematical logic.

Computational Neuroscience

My projects
  1. Neuromorphic circuits for speech recognition inspired by models of auditory edge detection.
  2. Molecular mechanisms behind auditory cortex lateralization, especially developing models using spatial transcriptomics data.
  3. Lateralized recurrent pathways in auditory cortex.

MERFISH accuracy estimation

Improved quality control for MERFISH data

wispack

Modeling gene transcription in space

Phenomenology

Coding Portfolio

Modeling Software

  • 2026. DACx: Digital auditory cortex, R/C++ package for running biologically realistic simulations of the mammalian auditory cortex, i.e., a “digital twin”. Network topologies are built from circuit motifs and spiking is simulated via growth-transform. [documentation] (R package, Rcpp, C++)
  • 2025. neuronsDG, R/C++ package for estimating auto- and cross-correlation in the spiking of single neurons with dichotomized Gaussians. [documentation] (R package, Rcpp, C++)
  • 2025. wispack, warped sigmoidal Poisson-process mixed-effects models for testing for functional spatial effects in spatial transcriptomics data. [documentation] [preprint] (R package, Rcpp, C++)
  • 2024. Hidden-state Bayesian learning, simulated Bayesian reasoner that learns a reach target based on auditory feedback, intended for modeling sonification data. (R scripts, C++)
  • 2023. Generalized linear modeling (GLM), demo of task-dependent somatomotor cortex responses from simulated fMRI data. (Python, CoLab)
  • 2023. Heuristics-based physical symbol system simulation, instructional demo, algorithm solves a version of the river-crossing problem. (Python, CoLab)
  • 2022. Two-pivot reach model for tracking position through Cartesian space from raw gyroscope readings. (C++, Arduino)
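The dichotomized-Gaussian construction behind neuronsDG can be sketched in a few lines. The package itself is R/C++; the minimal Python sketch below (all names and parameters are my own, for illustration only) generates two binary spike trains by thresholding correlated Gaussian latents, so that the latent correlation induces correlation between the spike trains:

```python
import math
import random
from statistics import NormalDist

random.seed(1)

def dichotomized_gaussian_pair(p_spike, rho, n_bins):
    """Sample two binary spike trains with per-bin spike probability
    p_spike, correlated via a shared Gaussian latent with correlation rho
    (the dichotomized-Gaussian construction)."""
    # Threshold chosen so P(z > thresh) = p_spike for standard normal z.
    thresh = NormalDist().inv_cdf(1.0 - p_spike)
    train_a, train_b = [], []
    for _ in range(n_bins):
        z1 = random.gauss(0.0, 1.0)
        # Correlated second latent: z2 = rho*z1 + sqrt(1 - rho^2)*noise.
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
        train_a.append(1 if z1 > thresh else 0)
        train_b.append(1 if z2 > thresh else 0)
    return train_a, train_b

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

spikes_a, spikes_b = dichotomized_gaussian_pair(p_spike=0.2, rho=0.8, n_bins=20000)
rate_a = sum(spikes_a) / len(spikes_a)        # close to 0.2 by construction
spike_corr = pearson(spikes_a, spikes_b)      # positive, but below the latent rho
```

Note that the binary correlation comes out lower than the latent rho; recovering the latent correlation from observed spike correlations is the inverse problem the estimation methods address.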

Statistics and Data Analysis

  • 2025. Bayesian MCMC parameter estimation, instructional demo on how to use Markov-chain Monte Carlo (MCMC) simulations to estimate parameters with Bayes’ rule. (R script)
  • 2024. Kinematic data analysis, full pipeline (data synchronization, signal filtering, linear mixed-effects modelling, and nonparametric bootstrapping) used for post-processing and statistical significance testing of kinematic and accuracy data (optical and inertial) from a motor control study involving reaches with movement sonification. (R scripts)
  • 2023. Sentiment analysis with deep neural network, instructional demo performing sentiment analysis of IMDb movie reviews with explanation of network opacity. (Python, CoLab, PyTorch)
  • 2023. Linear decoding (logistic regression) of motor tasks from simulated somatomotor cortex fMRI data. (Python, CoLab)
  • 2023. Single-layer neural network (McCulloch-Pitts Neuron), instructional demo on learning with the Perceptron Convergence Rule. (Python, CoLab)
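The MCMC demo above is an R script; the same idea fits in a short Python sketch (illustrative names and toy data only, not the demo itself). A Metropolis sampler draws from the posterior of a coin’s bias under a flat prior, and the sample mean approximates the analytic posterior mean (heads + 1)/(n + 2):

```python
import math
import random

random.seed(0)

# Toy data: coin flips. We estimate the bias theta under a flat prior.
data = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]  # 7 heads out of 10 flips
heads, n = sum(data), len(data)

def log_posterior(theta):
    """Log posterior up to a constant: flat prior on (0, 1) times
    a binomial likelihood."""
    if not 0.0 < theta < 1.0:
        return -math.inf
    return heads * math.log(theta) + (n - heads) * math.log(1.0 - theta)

def metropolis(n_steps, step_size=0.1):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio)."""
    theta, samples = 0.5, []
    for _ in range(n_steps):
        proposal = theta + random.gauss(0.0, step_size)
        delta = log_posterior(proposal) - log_posterior(theta)
        if delta >= 0 or random.random() < math.exp(delta):
            theta = proposal
        samples.append(theta)
    return samples

samples = metropolis(20000)[5000:]  # discard burn-in
posterior_mean = sum(samples) / len(samples)  # analytic value: 8/12 ≈ 0.667
```

The accept/reject step works entirely with the unnormalized posterior, which is the point of the method: the normalizing constant of Bayes’ rule never has to be computed.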
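The perceptron convergence rule from the single-layer network demo is compact enough to show inline. This is a self-contained sketch (names and the toy OR task are my own, not taken from the demo): the weight update w += lr * (target − prediction) * x is applied until a linearly separable problem is classified correctly:

```python
def perceptron_train(samples, epochs=20, lr=1.0):
    """Single threshold unit trained with the perceptron convergence
    rule: on each error, nudge weights toward the correct label."""
    n_features = len(samples[0][0])
    w, b = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Linearly separable toy problem: logical OR.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = perceptron_train(data)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
         for x, _ in data]  # [0, 1, 1, 1] after convergence
```

On a separable task like this the rule provably converges in finitely many updates; on a non-separable one (e.g. XOR) it never settles, which is the contrast the demo draws out.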

Hardware and Embedded Systems

Interested in chatting about human perception or movement sonification?