
Research and Ongoing Projects


The otoacoustic emission (OAE) is a sound generated by the inner ear that can be measured to understand cochlear mechanics and to create a general hearing profile of the person being tested. OAE testing provides an accurate method for assessing the auditory system. Much research has been done to improve and better understand OAE measurement, but output accuracy still has room for improvement, and the leading issue is noise interference. When noise artifacts are present, measurements often have to be repeated to achieve a good signal-to-noise ratio, a problem caused by both external and internal noise sources. The goal of this research was to address this prevalent issue in OAE recordings, specifically targeting the underexplored area of internally generated noises (IGNs) inside the human body. This was done by recording a data set of inner-ear sounds from a diverse group of participants.

An AI noise-reduction algorithm was then trained on that data, with the expectation that it would learn to recognize and subtract these noises from OAE measurements. The model aimed to significantly improve the efficiency of OAE testing by strengthening current noise-filtering capabilities. This would simplify clinical workflows and reduce the need for repeat testing, which ultimately benefits both patients and audiologists. The model also has potential for broader applications in other auditory diagnostics: ideally, it would support reliable hearing assessment across diverse clinical settings, not just audiological ones. Finally, the data set was intended to help shed light on previously untested connections between the acoustic output of bodily mechanisms (breathing, swallowing, limb movement) and their effect on auditory perception.
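
As a rough illustration of this kind of model, the sketch below trains a small 1-D convolutional denoiser on paired noisy/clean frames in PyTorch. The tensors are random placeholders standing in for measured OAE responses and recorded IGNs, and the architecture and training loop are assumptions made for illustration, not the project's actual implementation.

```python
# Minimal sketch of a learned noise-reduction model for OAE recordings (PyTorch).
# The recordings below are random placeholders standing in for measured signals,
# and the network is an illustrative assumption, not the project's architecture.
import torch
import torch.nn as nn

class DenoiserCNN(nn.Module):
    """Small 1-D convolutional network that maps a noisy frame to a clean frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),
        )

    def forward(self, x):
        return self.net(x)

# Placeholder tensors: 64 one-channel frames of 2048 samples each.
clean = torch.randn(64, 1, 2048)        # stand-in for clean OAE responses
ign = 0.3 * torch.randn(64, 1, 2048)    # stand-in for recorded internal noise
noisy = clean + ign                     # simulated noisy measurements

model = DenoiserCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    optimizer.zero_grad()
    estimate = model(noisy)             # network's denoised estimate
    loss = loss_fn(estimate, clean)     # learn to recover the clean signal
    loss.backward()
    optimizer.step()
```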

Research

During my time at the University of Miami, I served as a Teaching Assistant for the CHAI (Concerts with Humans and Artificial Intelligence) Ensemble, a $100K U‑LINK–funded initiative exploring the interplay between human creativity and AI in music. In this role, I facilitated instruction at both the undergraduate level (supporting Frost students in applying AI production tools) and the middle- and high-school level, delivering a summer curriculum via the MusicReach program that introduced students to both conventional and AI-powered music technologies.

Our work culminated in a public CHAI Showcase concert in April 2025, where performances featuring AI-human collaborations were presented alongside recorded demonstrations of our creative process. Additionally, I co‑presented our pedagogical and research findings with graduate student Lívia de Moraes, showcasing instructors’ experiences using AI tools in music education at the Association for Popular Music Education (APME) conference in June 2025.

[Image: CHAI music technology and AI curriculum materials]

Pilot Study on Loudness Perception

[Image: loudness perception methods paper, page 1]

For one of my graduate research projects, I conducted a study investigating how perceived loudness varies across age, gender, and history of noise exposure. Participants rated the loudness of different stimuli, including sine tones, speech, and environmental “walla” noise, across multiple intensity levels. While my statistical analysis in SPSS was not carried out correctly, the raw data still suggested interesting patterns, particularly a potential difference in loudness perception between women and men. However, the sample size was not large enough to establish a definitive trend.
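
As an illustration of the kind of re-analysis I would apply with stronger methods, the sketch below runs an independent-samples comparison of mean loudness ratings between two groups in Python rather than SPSS. The ratings are hypothetical placeholder values, not data from the study.

```python
# Illustrative re-analysis sketch: compare mean loudness ratings between two
# groups with an independent-samples t-test. The ratings below are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical mean loudness ratings (0-100 scale) per participant.
ratings_women = np.array([62, 70, 58, 74, 66, 69, 61, 72])
ratings_men = np.array([55, 60, 52, 64, 58, 57, 63, 59])

t_stat, p_value = stats.ttest_ind(ratings_women, ratings_men, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")

# Effect size (Cohen's d) gives a sense of magnitude independent of sample size.
pooled_sd = np.sqrt((ratings_women.var(ddof=1) + ratings_men.var(ddof=1)) / 2)
cohens_d = (ratings_women.mean() - ratings_men.mean()) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")
```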

Through this project, I gained hands-on experience in study design, participant testing, and data collection, while also learning the importance of applying appropriate statistical methods and validating results. Even with its limitations, this study deepened my understanding of how demographic factors can shape auditory perception and prepared me to approach future research with stronger methodological rigor.

Engineering Projects

I am the co-founder of Vocal Unity AI, an audio technology startup developed with Spencer Soule. This project launched after we were awarded a USTAAR (University of Miami Student Accelerator for Academic and Artistic Research) grant, a competitive initiative supported by the Alvarez Fund and led by Dr. Suhrud Rajguru. USTAAR fosters multidisciplinary innovation by providing student teams with seed funding, mentorship, and access to commercialization resources. With this support, Vocal Unity AI is now in active development. Our project addresses one of the most common frustrations in vocal production: inconsistencies between takes recorded on different days or under different conditions. Variations in microphone choice, room acoustics, time of day, or the vocalist’s condition often lead to mismatched timbres, requiring time-consuming post-production.

[Image: investor pitch deck, page 1]

Vocal Unity AI provides a solution through a machine learning plugin that automatically aligns vocal timbre across recording sessions, ensuring consistent sound quality for both professional studios and home producers. Our target users include producers, independent artists, and content creators, with future applications in live performance, podcasts, voiceovers, and audiobooks. The roadmap also includes advanced features such as automatic harmony generation and pitch correction. By lowering technical barriers, Vocal Unity AI empowers creators to save time, streamline production, and stay focused on creativity, embodying the very goals of USTAAR by transforming academic research and student innovation into a viable startup venture.
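
The core matching step can be approximated with classical DSP: estimate each take's long-term average spectrum, derive a smoothed correction curve, and filter the new take toward the reference. The sketch below shows that baseline approach with NumPy/SciPy on synthetic signals; it is a simplified stand-in for illustration, not the machine learning model Vocal Unity AI is developing.

```python
# Baseline timbre-matching sketch: match the long-term average spectrum (LTAS)
# of a new vocal take to a reference take with a smoothed FFT-domain EQ curve.
# A classical stand-in for the plugin's ML model; the signals are synthetic noise.
import numpy as np
from scipy.signal import welch, firwin2, lfilter

fs = 48000
rng = np.random.default_rng(0)
reference_take = rng.standard_normal(fs * 5)                      # stand-in for day-1 vocal
new_take = lfilter([0.4], [1.0, -0.6], rng.standard_normal(fs * 5))  # "darker" second take

def ltas(x, fs, nfft=4096):
    """Long-term average magnitude spectrum via Welch's method."""
    freqs, psd = welch(x, fs=fs, nperseg=nfft)
    return freqs, np.sqrt(psd)

freqs, ref_mag = ltas(reference_take, fs)
_, new_mag = ltas(new_take, fs)

# Correction curve: move the new take toward the reference, with a gain ceiling
# so narrow spectral dips don't produce extreme boosts.
correction = np.clip(ref_mag / np.maximum(new_mag, 1e-12), 0.25, 4.0)

# Build a linear-phase matching filter from the correction curve and apply it.
grid = np.linspace(0, 1, 513)                      # normalized frequency grid (0..Nyquist)
gains = np.interp(grid * fs / 2, freqs, correction)
match_fir = firwin2(1025, grid, gains)
matched_take = lfilter(match_fir, [1.0], new_take)
```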

As part of my undergraduate studies in Music Engineering, I completed a series of DSP projects that explored both the technical foundations and creative applications of audio signal processing. These included designing custom plugins such as parametric EQs, gain controllers, and filters, building effects like long delay lines and dynamic range compressors, and developing more experimental tools like the Flex Filter Pro, a modulation-based filter with extensive user control. I also created specialized instruments and processors, such as the Yoshynth, a monophonic synth with adjustable filters and waveform options, and the VoxCddr, a streamlined one-knob vocoder designed for live performance ease. Additionally, I designed the Harmony Helper interface, which leveraged AI-driven harmonic generation to support user creativity in real time. Across these projects, I gained hands-on experience implementing DSP algorithms, mapping them to user-friendly interfaces, and refining them through user testing and feedback.
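
As a concrete example of the kind of building block these plugins rely on, the sketch below computes peaking-EQ biquad coefficients from the standard Audio EQ Cookbook formulas and applies the filter to a test signal. It is a generic illustration, not code taken from any of the plugins listed below.

```python
# Generic peaking-EQ biquad (Audio EQ Cookbook formulas), the kind of filter
# stage a parametric EQ plugin is built from. Illustrative, not plugin code.
import numpy as np
from scipy.signal import lfilter

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Return (b, a) biquad coefficients for one peaking EQ band."""
    A = 10 ** (gain_db / 40)        # amplitude from dB gain
    w0 = 2 * np.pi * f0 / fs        # center frequency in radians/sample
    alpha = np.sin(w0) / (2 * q)    # bandwidth term

    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]       # normalize so a[0] == 1

fs = 48000
b, a = peaking_eq_coeffs(fs, f0=1000, gain_db=6.0, q=1.0)  # +6 dB bell at 1 kHz

rng = np.random.default_rng(0)
test_signal = rng.standard_normal(fs)   # one second of white noise
boosted = lfilter(b, a, test_signal)    # signal with the EQ band applied
```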

YOSHYNTH – A Playful, Full-Control Synthesizer

Filter Flex – All-Inclusive Modulator Plug-In

VOXCDDR – Single-Knob Vocoder Controller

All plugins available for free download by email request.

Hardware Projects & Designs

[Image: cable tester]

I assembled a multi-purpose audio utility combining a cable tester and a modified DI box. The tester uses LEDs and switches to quickly diagnose faults in XLR and TRS/TS cables, while the DI box modification improves impedance matching and provides cleaner, balanced outputs. This project strengthened my hands-on electronics skills and my grounding in applied audio engineering design.

[Image: cable tester, final build]

For my Transducers course final, I designed and built a pair of passive computer speakers from scratch. The project covered the full design process, including specifications, component selection, filter design, and frequency response testing. By integrating drivers, crossover filtering, and enclosure design, I created a functioning speaker system capable of reproducing computer audio with clarity. The final product not only met performance goals but also showcased hands-on application of transducer theory, acoustics, and circuit design in a practical audio system.
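
For context on the crossover math, the short sketch below computes second-order Linkwitz-Riley passive crossover component values from textbook formulas, assuming purely resistive nominal driver impedances. The crossover frequency and impedances shown are illustrative, not the values from my build.

```python
# Second-order Linkwitz-Riley passive crossover values from textbook formulas,
# assuming purely resistive nominal driver impedances. Illustrative values only,
# not the components from my actual build.
import math

def lr2_crossover(f_c, r_woofer, r_tweeter):
    """Return (woofer series L, woofer shunt C, tweeter series C, tweeter shunt L)."""
    # Low-pass leg (woofer): series inductor, shunt capacitor.
    l_woofer = r_woofer / (math.pi * f_c)
    c_woofer = 1 / (4 * math.pi * f_c * r_woofer)
    # High-pass leg (tweeter): series capacitor, shunt inductor.
    c_tweeter = 1 / (4 * math.pi * f_c * r_tweeter)
    l_tweeter = r_tweeter / (math.pi * f_c)
    return l_woofer, c_woofer, c_tweeter, l_tweeter

# Example: 2.5 kHz crossover into an 8-ohm woofer and 8-ohm tweeter.
l_w, c_w, c_t, l_t = lr2_crossover(2500, 8.0, 8.0)
print(f"Woofer:  L = {l_w * 1e3:.2f} mH, C = {c_w * 1e6:.2f} uF")
print(f"Tweeter: C = {c_t * 1e6:.2f} uF, L = {l_t * 1e3:.2f} mH")
```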


For an electronics class final project, I designed and simulated a single-stage Class-A amplifier using a 2N2222 BJT in PSpice. The amplifier was designed to drive an 8 Ω speaker load with performance requirements including a peak load current >120 mA before clipping, proper biasing across a β range of 100–300, and power consumption constraints (<0.6 W for the transistor, <3 W for resistors). I implemented DC sweep, transient, and frequency response simulations to evaluate bias conditions, voltage gain, and -3 dB cutoff frequencies, ensuring stability and linearity. The project reinforced my skills in analog circuit design, simulation, and performance analysis.
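
As a back-of-the-envelope illustration of the constraint checking involved, the sketch below verifies that an assumed quiescent operating point supports the >120 mA peak swing and stays within the transistor's power limit. The bias values are hypothetical, not those of my final PSpice circuit.

```python
# Back-of-the-envelope Class-A bias check: does an assumed quiescent point
# support the required peak load current and stay within the power limits?
# The quiescent values below are hypothetical, not my final PSpice design.
I_PEAK_REQUIRED = 0.120   # A, required peak load current before clipping
P_TRANSISTOR_MAX = 0.6    # W, transistor dissipation limit
R_LOAD = 8.0              # ohms, speaker load

# Hypothetical quiescent operating point.
i_cq = 0.150              # A, quiescent collector current
v_ceq = 3.5               # V, quiescent collector-emitter voltage

# In Class A the output current swings around I_CQ, so (ignoring the saturation
# side) the peak load current is roughly limited by the quiescent current itself:
# the stage clips when the instantaneous collector current reaches zero.
max_peak_load_current = i_cq
p_transistor = v_ceq * i_cq                              # quiescent dissipation
p_load_max = 0.5 * max_peak_load_current**2 * R_LOAD     # max sine power into load

print(f"Peak load current available: {max_peak_load_current*1e3:.0f} mA "
      f"(need > {I_PEAK_REQUIRED*1e3:.0f} mA)")
print(f"Transistor dissipation: {p_transistor:.2f} W (limit {P_TRANSISTOR_MAX} W)")
print(f"Max sine power into {R_LOAD:.0f} ohm load: {p_load_max*1e3:.0f} mW")
```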

Let's Connect!

