Overview

The MELT-Mixtral-8x7B-Instruct-v0.1 Large Language Model (LLM) is a generative text model pre-trained and fine-tuned using publicly available medical data. At the time of writing, the model is 6% more accurate than Google's 540-billion-parameter Med-PaLM, which is roughly 10x larger. MELT is intended for research purposes only. MELT models are best suited for prompts using a QA or chat format.
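Because MELT-Mixtral-8x7B-Instruct-v0.1 is fine-tuned from Mixtral-8x7B-Instruct-v0.1, it presumably expects the same instruction format as its base model. The sketch below shows how such a QA-style prompt could be rendered through the tokenizer's chat template using the standard transformers API; the example question is illustrative only.

```python
from transformers import AutoTokenizer

# Repo ID taken from the model link below.
tokenizer = AutoTokenizer.from_pretrained("IBI-CAAI/MELT-Mixtral-8x7B-Instruct-v0.1")

# A single-turn medical QA prompt rendered through the chat template.
messages = [
    {"role": "user", "content": "What is the first-line treatment for type 2 diabetes?"}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # expected to resemble "<s>[INST] ... [/INST]" if the Mixtral template is inherited
```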

The Medical Education Language Transformer (MELT) models have been trained on a wide range of text, chat, Q/A, and instruction data in the medical domain.

While the model was evaluated using publicly available USMLE, Indian AIIMS, and NEET example questions, its use is intended to be more broadly applicable.

MELT was trained using publicly available collections, which likely contain biased and inaccurate information. The training and evaluation datasets have not been inspected for content or accuracy.

Benchmarks

MELT-Mixtral-8x7B-Instruct-v0.1 averages 68.2% accuracy across three medical examination benchmarks (USMLE, Indian AIIMS, and NEET), surpassing the pass mark (>60%) on U.S. Medical Licensing Examination (USMLE)-style questions.

Model Description

  • Developed by: Center for Applied AI
  • Funded by: Institute of Biomedical Informatics
  • Model type: LLM
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Fine-tuned from model: Mixtral-8x7B-Instruct-v0.1

https://huggingface.co/IBI-CAAI/MELT-Mixtral-8x7B-Instruct-v0.1
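A minimal sketch of loading the model from the link above and running a QA-style generation, assuming the standard transformers API and enough GPU memory for an 8x7B mixture-of-experts model; the generation settings and example question are illustrative, not prescribed by the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IBI-CAAI/MELT-Mixtral-8x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # shard across available GPUs
)

# Chat-format prompt, as recommended for MELT models.
messages = [{"role": "user", "content": "List the classic signs of appendicitis."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```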
