Reducing Mental Health Call Center Wait Times: An Operations Research Approach

with Yaren Bilge Kaya

Mental health call centers provide a critical service to people experiencing psychological distress or crisis, but long wait times can significantly decrease their effectiveness. This thesis proposes a data-driven framework that optimizes staffing levels for mental health call centers in order to reduce wait times while maintaining service efficiency. Using historical call volume data from 113 Suicide Prevention in the Netherlands, and informed by an interview with the United States’ 988 Lifeline, we apply and evaluate two forecasting methods, SARIMA and Prophet, and compare their accuracy. Prophet consistently outperforms SARIMA, achieving up to 50% lower RMSE and MAPE in daily forecasts and roughly 25% lower RMSE and MAPE in hourly forecasts one month ahead. These forecasts serve as inputs to two stochastic queueing models developed by Mandelbaum (2009), QED and QED+ED, which estimate the number of servers required to meet caller demand under uncertainty in caller behavior. The QED+ED model, which incorporates caller patience and abandonment thresholds, recommends one to two more servers than the QED model during peak hours and is therefore better suited to high-risk environments such as mental health support services. The resulting staffing schedules offer a practical and scalable way to improve responsiveness and reduce abandonment in crisis hotlines. This work demonstrates how operations research methods can enhance public health service delivery by optimizing staffing decisions under patient-centered constraints.
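To make the forecast-then-staff pipeline concrete, here is a minimal sketch (not the thesis code): it forecasts hourly call volume with Prophet and then sizes staffing with the standard square-root staffing rule associated with the QED regime. The historical data, 20-minute mean call duration, and quality parameter beta are all hypothetical, and the abandonment-aware QED+ED variant used in the thesis is not shown.

```python
# Illustrative sketch only: hypothetical data and parameters throughout.
import math
import pandas as pd
from prophet import Prophet

# Hypothetical historical call counts, one row per hour for ~90 days.
history = pd.DataFrame({
    "ds": pd.date_range("2023-01-01", periods=24 * 90, freq="H"),
    "y": [20 + 10 * math.sin(2 * math.pi * (h % 24) / 24) for h in range(24 * 90)],
})

model = Prophet()              # daily/weekly seasonality detected automatically
model.fit(history)

future = model.make_future_dataframe(periods=24, freq="H")   # next day, hourly
forecast = model.predict(future).tail(24)

mean_service_hours = 20 / 60   # assumed 20-minute average call
beta = 1.0                     # assumed QED quality-of-service parameter

for _, row in forecast.iterrows():
    lam = max(row["yhat"], 0.0)                 # forecasted arrivals (calls/hour)
    offered_load = lam * mean_service_hours     # Erlang offered load R
    # QED square-root staffing: N = ceil(R + beta * sqrt(R))
    servers = math.ceil(offered_load + beta * math.sqrt(offered_load))
    print(f"{row['ds']}: ~{lam:.1f} calls/h -> {servers} counselors")
```

In this sketch the QED+ED adjustment would effectively raise the safety staffing term during peak hours, which is the behavior the thesis observes when caller patience and abandonment are modeled explicitly.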

Refining Gender Bias Detection Algorithms Using Sentiment Analysis

with Yvon Lu and Sophia Tu

Despite efforts to promote diversity and inclusion, the underrepresentation of women in male-dominated fields such as engineering and finance persists, especially in leadership. The hiring process plays an important role in the recruitment and retention of women, and one aspect that has gained attention in recent years is the potential influence of the language used in job descriptions on gender imbalances in applicant pools. As new techniques for analyzing language emerge, we sought to counteract existing biases and build richer models to guide recruiters. Improving on existing NLP models for gender bias detection (Gaucher et al., 2011), we incorporated tone via sentiment analysis to produce a more nuanced understanding of potentially discriminatory language. Using job descriptions scraped from popular job boards (Indeed, Monster, CareerBuilder), we obtained over 60,000 data points spanning a wide variety of industries and career levels. We also built an RNN model that predicts the resulting bias scores with high accuracy. Analysis of the collected data indicated a statistically significant difference in bias across industries, as well as across job qualifications; in particular, we observed a strong negative (male-leaning) bias in job descriptions specifying higher educational requirements. Furthermore, we collected job descriptions from Forbes’ 2022 World’s Top Female-Friendly Companies list and ran additional PSM testing to determine whether the difference in bias was significant. We hope this refinement of existing algorithms introduces new ways to combat discriminatory language that go beyond the base measurement of gender-coded words.
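As a rough illustration of combining gender-coded word counts with tone, here is a minimal sketch (not the authors’ model): the word lists are small samples in the spirit of the Gaucher et al. lexicons, the sentiment score comes from NLTK’s off-the-shelf VADER analyzer, and the combined score is a hypothetical formulation rather than the RNN described above.

```python
# Illustrative sketch only: sample lexicons and a hypothetical scoring rule.
import re
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

MASCULINE_CODED = {"competitive", "dominant", "leader", "ambitious", "decisive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "interpersonal", "committed"}

sia = SentimentIntensityAnalyzer()

def bias_score(job_description: str) -> dict:
    words = re.findall(r"[a-z]+", job_description.lower())
    masc = sum(w in MASCULINE_CODED for w in words)
    fem = sum(w in FEMININE_CODED for w in words)
    coded = masc + fem
    # Word-count bias in [-1, 1]: negative = male-leaning, positive = female-leaning.
    word_bias = (fem - masc) / coded if coded else 0.0
    tone = sia.polarity_scores(job_description)["compound"]  # VADER sentiment in [-1, 1]
    # Hypothetical combination: amplify the word-level bias when the tone is strong.
    combined = word_bias * (1 + abs(tone))
    return {"word_bias": word_bias, "tone": tone, "combined": combined}

print(bias_score("We seek a competitive, decisive leader to dominate the market."))
print(bias_score("Join our supportive, collaborative team of committed engineers."))
```

The two example descriptions would score male-leaning and female-leaning respectively, which is the kind of signal the full pipeline learns to predict directly from raw job-description text.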