aiberry is Visionary

Our Patent-Pending Multimodal Approach

Our patent-pending multimodal approach analyzes audio, video, and language signals to offer objective assessments that clinicians can rely on.

Using AI techniques such as machine learning (ML) and Natural Language Processing (NLP) to examine these signals both in isolation and as an aggregated pool of data, aiberry delivers greater accuracy, augments a clinician’s capacity, and accelerates time-to-insight.
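
To make the idea concrete, here is a minimal sketch (not aiberry's actual pipeline) of how separate audio, video, and language streams might each be reduced to a feature vector before any modeling. The extractor functions and the statistics they compute are illustrative placeholders, not the real encoders.

    import numpy as np

    # Illustrative stand-ins for real audio, video, and language encoders.
    # Each turns one raw signal into a small feature vector that can be
    # examined on its own or pooled with the other modalities.
    def extract_audio_features(waveform: np.ndarray) -> np.ndarray:
        # placeholder for prosodic features such as energy and variability
        return np.array([np.abs(waveform).mean(), waveform.std()])

    def extract_video_features(frames: np.ndarray) -> np.ndarray:
        # placeholder for facial-expression statistics across frames
        return np.array([frames.mean(), frames.std()])

    def extract_text_features(transcript: str) -> np.ndarray:
        # placeholder for NLP-derived language features
        words = transcript.split()
        return np.array([float(len(words)), float(np.mean([len(w) for w in words]))])

    audio_vec = extract_audio_features(np.random.randn(16_000))     # one voice recording
    video_vec = extract_video_features(np.random.rand(30, 64, 64))  # 30 video frames
    text_vec = extract_text_features("I have been feeling tired and low lately")
    pooled_vec = np.concatenate([audio_vec, video_vec, text_vec])   # aggregated pool of data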

Our Platform Architecture

Our cloud-hosted, end-to-end architecture uses an array of plug-and-play components to extract data from voice and video recordings. The multimodal data is then analyzed using both early- and late-fusion ML models, as sketched below. This modular design gives us the transparency to test and improve our algorithms, corroborate our results, and gain critical insight into the AI/ML decision-making process.
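
The snippet below is a minimal, generic sketch of the difference between early and late fusion, not aiberry's production models: it assumes scikit-learn's LogisticRegression and random toy data in place of the real classifiers, features, and recordings.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy per-modality feature matrices (n_samples x n_features) and binary
    # screening labels; real features would come from the audio, video, and
    # language components of the pipeline.
    n = 200
    audio_X, video_X, text_X = (rng.normal(size=(n, 8)) for _ in range(3))
    y = rng.integers(0, 2, size=n)

    # Early fusion: concatenate the modality features and fit a single model,
    # letting it learn cross-modal interactions directly.
    early_X = np.hstack([audio_X, video_X, text_X])
    early_model = LogisticRegression(max_iter=1000).fit(early_X, y)
    early_scores = early_model.predict_proba(early_X)[:, 1]

    # Late fusion: fit one model per modality, then combine their scores
    # (simple averaging here; a learned meta-model is another option).
    modalities = (audio_X, video_X, text_X)
    per_modality = [LogisticRegression(max_iter=1000).fit(X, y) for X in modalities]
    late_scores = np.mean(
        [m.predict_proba(X)[:, 1] for m, X in zip(per_modality, modalities)], axis=0
    )

Late fusion keeps each modality's contribution separate and easier to inspect, while early fusion can capture interactions across signals; running both is one way to corroborate results and keep the decision-making process transparent.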

Our Use Cases

Behavioral health-related illnesses have reached epidemic proportions, further taxing an already fragile system of care. aiberry offers providers and health systems measurable value across the care continuum.

Screening in clinical settings

  • Express assessment for depression

  • ER visits

  • Primary care

  • Pre- and post-surgery

Validating clinical observations

  • Verifying clinical determinations

  • Providing second opinions

  • Expediting complicated diagnostic paths

Screening in the field

  • At home

  • In our schools

  • In the workplace

Virtual follow-up via telehealth

  • Synchronous (video call with patient and provider)

  • Asynchronous (video messages)

Self-screening for consumer use

  • Self-administered assessments

  • Referral mechanisms

  • Triage channels