
In This Article

  • Summary
  • Abstract
  • Introduction
  • Protocol
  • Results
  • Discussion
  • Disclosures
  • Acknowledgements
  • Materials
  • References

Summary

This protocol provides a method for automated tracking of eye squint in rodents over time in a manner compatible with time-locking to neurophysiological measures. This protocol is expected to be useful to researchers studying mechanisms of pain disorders such as migraine.

Abstract

Spontaneous pain has been challenging to track in real time and to quantify in a way that prevents human bias. This is especially true for metrics of head pain, as in disorders such as migraine. Eye squint has emerged as a continuous-variable metric that can be measured over time and is effective for predicting pain states in such assays. This paper provides a protocol for the use of DeepLabCut (DLC) to automate and quantify eye squint (the Euclidean distance between eyelids) in restrained mice with freely rotating head motions. This protocol enables unbiased quantification of eye squint to be paired with, and compared directly against, mechanistic measures such as neurophysiology. We provide an assessment of the AI training parameters necessary for success, defined as discriminating squint and non-squint periods. We demonstrate the ability to reliably track and differentiate squint in a CGRP-induced migraine-like phenotype at sub-second resolution.

Introduction

Migraine is one of the most prevalent brain disorders worldwide, affecting more than one billion people1. Preclinical mouse models of migraine have emerged as an informative way to study the mechanisms of migraine as these studies can be more easily controlled than human studies, thus enabling causal study of migraine-related behavior2. Such models have demonstrated a strong and repeatable phenotypic response to migraine-inducing compounds, such as calcitonin-gene-related peptide (CGRP). The need for robust measurements of migraine-relevant behaviors in rodent models persists, especially those that may be coupled with mechanistic metrics such as imaging and electrophysiological approaches.

Migraine-like brain states have been phenotypically characterized by the presence of light aversion, paw allodynia, facial hyperalgesia to noxious stimuli, and facial grimace3. Such behaviors are measured by total time spent in light (light aversion) and paw or facial touch sensitivity thresholds (paw allodynia and facial hyperalgesia) and are restricted to a single readout over large periods of time (minutes or longer). Migraine-like behaviors can be elicited in animals by dosing with migraine-inducing compounds such as CGRP, mimicking symptoms experienced by human patients with migraine3 (i.e., demonstrating face validity). Such compounds also produce migraine symptoms when administered in humans, demonstrating the construct validity of these models4. Studies in which behavioral phenotypes were attenuated pharmacologically have led to discoveries related to the treatment of migraine and provide further substantiation of these models (i.e., demonstrating predictive validity)5,6.

For example, a monoclonal anti-CGRP antibody (ALD405) was shown to reduce light-aversive behavior5 and facial grimace in mice6 treated with CGRP, and other studies have demonstrated that CGRP antagonist drugs reduce nitric oxide-induced migraine-like behaviors in animals7,8. Recent clinical trials have shown success in treating migraine by blocking CGRP9,10, leading to multiple FDA-approved drugs targeting CGRP or its receptor. Preclinical assessment of migraine-related phenotypes has led to breakthroughs in clinical findings and is, therefore, essential to understanding some of the more complex aspects of migraine that are difficult to test directly in humans.

Despite numerous advantages, experiments using these rodent behavioral readouts of migraine are often restricted in the time points they can sample and can be subjective and prone to human experimental error. Many behavioral assays are limited in their ability to capture activity at finer temporal resolutions, making it difficult to resolve more dynamic elements that occur at a sub-second timescale, such as at the level of brain activity. It has proven difficult to quantify the more spontaneous, naturally occurring elements of behavior over time at a temporal resolution meaningful for studying neurophysiological mechanisms. Creating a way to identify migraine-like activity at faster timescales would allow external validation of migraine-like brain states. This, in turn, could be synchronized with brain activity to create more robust brain activity profiles of migraine.

One such migraine-related phenotype, facial grimace, is utilized across various contexts as a measurement of pain in animals that can be measured instantaneously and tracked over time11. Facial grimace is often used as an indicator of spontaneous pain based on the idea that humans (especially non-verbal humans) and other mammalian species display natural changes in facial expression when experiencing pain11. Studies measuring facial grimace as an indication of pain in mice in the last decade have utilized scales such as the Mouse Grimace Scale (MGS) to standardize the characterization of pain in rodents12. The facial expression variables of the MGS include orbital tightening (squint), nose bulge, cheek bulge, ear position, and whisker change. Even though the MGS has been shown to reliably characterize pain in animals13, it is notoriously subjective and relies on accurate scoring, which can vary across experimenters. Additionally, the MGS is limited in that it utilizes a non-continuous scale and lacks the temporal resolution needed to track naturally occurring behavior across time.

One way to address this is to objectively quantify a consistent facial feature. Squint is the most consistently trackable facial feature6 and accounts for the majority of the total variability in the data across all of the MGS variables (squint, nose bulge, cheek bulge, ear position, and whisker change)6. Because squint contributes most to the overall MGS score and reliably tracks the response to CGRP6,14, it is the most reliable way to track spontaneous pain in migraine mouse models, making squint a quantifiable, non-homeostatic behavior induced by CGRP. Several labs have used facial expression features, including squint, to represent potential spontaneous pain associated with migraine6,15.

Several challenges remain in carrying out automated squint measurements in a way that can be coupled with mechanistic studies of migraine. For example, it has been difficult to reliably track squint without relying on a fixed position that must be calibrated in the same manner across sessions. Another challenge is carrying out this type of analysis on a continuous scale rather than on discrete scales such as the MGS. To mitigate these challenges, we aimed to integrate machine learning, in the form of DeepLabCut (DLC), into our data analysis pipeline. DLC is a pose estimation machine learning model developed by Mathis and colleagues that has been applied to a wide range of behaviors16. Using their pose estimation software, we were able to train models that accurately predict points on a mouse eye at near-human accuracy. This solves the issue of repetitive manual scoring while drastically increasing temporal resolution. Further, by creating these models, we have established a repeatable means to score squint and estimate migraine-like brain activity over larger experimental groups. Here, we present the development and validation of this method for tracking squint behaviors in a way that can be time-locked to other mechanistic measurements such as neurophysiology. The overarching goal is to catalyze mechanistic studies requiring time-locked squint behaviors in rodent models.

Protocol

NOTE: All animals utilized in these experiments were handled according to protocols approved by the Institutional Animal Care and Use Committee (IACUC) of the University of Iowa.

1. Prepare equipment for data collection

  1. Ensure the availability of all necessary equipment, and confirm that the recommended hardware for running DLC has at least 8 GB of memory. See the Table of Materials for information related to hardware and software.
    NOTE: Data can be collected in any format but must be converted to a format readable by DLC before analysis. The most common formats are AVI and MP4.
  2. Configure at least one camera so that one eye of the animal can be detected. If both eyes are visible, apply additional filtering, as tracking both eyes may cause interference. See section 10 for an example of such filtering applied to the data provided here.
  3. Install DLC using the package found at Deeplabcut.github.io/DeepLabCut/docs/installation.
  4. In the camera setup, include a single camera at a side angle (~90°) to the mouse. To follow this example, sample at 10 Hz, with the mice restrained but free to access their full range of head movements relative to the body. Keep the camera 2-4 inches from the animal.

2. Setting up DLC

  1. After installing DLC, create the environment to work in. To do this, navigate to the folder where the DLC software was downloaded using the change directory (cd) command.
    cd folder_name
    NOTE: This will be where the DEEPLABCUT.yaml file is located.
  2. Create the environment with the first command, then activate it with the second command.
    conda env create -f DEEPLABCUT.yaml
    conda activate Deeplabcut
    NOTE: Ensure the environment is activated before each use of DLC.
  3. After activating the environment, open the graphical user interface (GUI) with the following command and begin creating the model.
    python -m deeplabcut

3. Create the model

  1. After the GUI is opened, begin creating a model by clicking on Create New Project at the bottom.
  2. Name the project something meaningful and unique so that it can be identified later, and enter a name for the experimenter. Check the Location section to see where the project will be saved.
  3. Select Browse folders and find the videos to train the model. Select Copy videos to project folder so that the videos do not need to be moved from their original directory.
  4. Select Create to generate a new project on the computer. An equivalent call using the DLC Python API is sketched after this section.
    NOTE: The videos must cover the full range of the behavior you will observe (i.e., squint, non-squint, and all behaviors in between). The model will only be able to recognize behavior similar to that in the training data, and if some components of the behavior are missing, the model may have trouble recognizing it.
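
For users who prefer scripting to the GUI, the same project can also be created with the DLC Python API. The following is a minimal sketch; the project name, experimenter name, and video paths are placeholders.

    # Minimal sketch using the DeepLabCut Python API (names and paths are placeholders).
    import deeplabcut

    config_path = deeplabcut.create_new_project(
        "EyeSquint",                                 # project name
        "Researcher",                                # experimenter name
        ["videos/mouse1.mp4", "videos/mouse2.mp4"],  # training videos spanning squint and non-squint
        copy_videos=True)                            # copy videos into the project folder
    print(config_path)  # path to the project's config.yaml, used in all later steps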

4. Configure the settings

NOTE: This is where details like what points to track, how many frames to extract from each training video, default labeling dot size, and variables relating to how the model will train can be defined.

  1. After creating the model, select Edit config.yaml, then select Edit to open the configuration file and specify key settings for the model.
  2. Modify bodyparts to include all parts of the eye to track, then modify numframes2pick to the number of frames needed per training video to reach 400 total frames. Lastly, change dotsize to 6 so that the default labeling dot is small enough to be placed accurately around the edges of the eye. These settings can also be applied programmatically, as sketched below.
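
As an alternative to hand-editing, the same settings can be written to config.yaml with a short script. The sketch below assumes PyYAML is available and uses a hypothetical project path and illustrative body-part names; note that rewriting the file this way drops the explanatory comments DLC places in config.yaml.

    import yaml  # PyYAML

    config_path = "EyeSquint-Researcher-2024-01-01/config.yaml"  # hypothetical project path
    with open(config_path) as f:
        cfg = yaml.safe_load(f)

    # Points around the eyelids to track (names are illustrative, not prescribed by DLC).
    cfg["bodyparts"] = ["top_center_1", "top_center_2", "bottom_center_1", "bottom_center_2"]
    # Frames extracted per training video; choose so that videos x frames totals about 400.
    cfg["numframes2pick"] = 50   # e.g., 8 training videos x 50 frames = 400
    # A smaller default dot size makes labels easier to place on the eyelid margin.
    cfg["dotsize"] = 6

    with open(config_path, "w") as f:
        yaml.safe_dump(cfg, f, sort_keys=False)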

5. Extract training frames

  1. Following configuration, navigate to the Extract Frames tab at the top of the GUI and select Extract Frames at the bottom right of the page.
  2. Monitor progress using the loading bar at the bottom of the GUI. Frame extraction can also be run from the Python API, as sketched below.
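
A minimal sketch of the same step using the Python API (the config path is a placeholder):

    import deeplabcut

    config_path = "EyeSquint-Researcher-2024-01-01/config.yaml"  # placeholder path
    deeplabcut.extract_frames(
        config_path,
        mode="automatic",    # sample frames from each training video without manual selection
        algo="kmeans",       # cluster frames so the extracted set spans squint and non-squint postures
        userfeedback=False)  # do not prompt for confirmation per video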

6. Label training frames

  1. Navigate to the Label Frames tab in the GUI and select Label Frames. Find the new window that shows folders for each of the selected training videos. Select the first folder, and a new labeling GUI will open.
  2. Label the points defined during configuration for every frame of the selected video. After all frames are labeled, save them and repeat the process for the next video.
  3. For adequate labeling of squint, place two pairs of points as close to the widest part (center) of the eye as possible, marking the top and bottom eyelid positions for each pair. Approximate squint as the average of these two top-to-bottom distances.
    NOTE: When labeling, DLC does not automatically save progress. Save periodically to avoid losing labeled data. The labeling GUI can also be launched from the Python API, as sketched below.
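
A minimal sketch of launching the labeler and reviewing label placement from the Python API (the config path is a placeholder):

    import deeplabcut

    config_path = "EyeSquint-Researcher-2024-01-01/config.yaml"  # placeholder path
    deeplabcut.label_frames(config_path)   # opens the same labeling GUI described above
    deeplabcut.check_labels(config_path)   # writes labeled images to disk so placements can be reviewed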

7. Create a training dataset

  1. After manual labeling is complete, navigate to the Train network tab and select Train network to prompt the software to start training the model.
  2. Monitor training progress in the command window. Dataset creation and training can also be run from the Python API, as sketched below.
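
A minimal sketch of the same steps using the Python API; the config path is a placeholder, and the iteration counts are assumptions to be adjusted to the available compute.

    import deeplabcut

    config_path = "EyeSquint-Researcher-2024-01-01/config.yaml"  # placeholder path
    deeplabcut.create_training_dataset(config_path)   # packages the labeled frames for training
    deeplabcut.train_network(
        config_path,
        shuffle=1,
        displayiters=1000,    # print the loss every 1,000 iterations
        saveiters=50000,      # save a snapshot every 50,000 iterations
        maxiters=200000)      # assumed training length; adjust while monitoring the loss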

8. Evaluate the network

  1. After network training is complete, navigate to the Evaluate network tab and select Evaluate network. Wait until the blue loading circle disappears, indicating that self-evaluation has finished and the model is ready for use. Evaluation can also be run from the Python API, as sketched below.
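
A minimal sketch of evaluation from the Python API (the config path is a placeholder):

    import deeplabcut

    config_path = "EyeSquint-Researcher-2024-01-01/config.yaml"  # placeholder path
    deeplabcut.evaluate_network(config_path, plotting=True)  # reports train/test errors and plots predictions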

9. Analyze data/generate labeled videos

  1. To analyze videos, navigate to the Analyze videos tab. Select Add more videos and select the videos to be analyzed.
  2. Select Save result(s) as csv if a csv output of the data is sufficient.
  3. When the videos have all been acquired, select Analyze videos at the bottom to begin analysis of the videos.
    NOTE: This step must be completed before generating labeled videos in step 9.5.
  4. After the videos have been analyzed, navigate to the Create videos tab and select the analyzed videos.
  5. Select Create videos, and the software will begin generating labeled videos that represent the data shown in the corresponding .csv file. Analysis and labeled-video creation can also be run from the Python API, as sketched below.
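
A minimal sketch of the same steps using the Python API (the config and video paths are placeholders):

    import deeplabcut

    config_path = "EyeSquint-Researcher-2024-01-01/config.yaml"               # placeholder path
    videos = ["experiment/mouse1_cgrp.mp4", "experiment/mouse2_vehicle.mp4"]  # placeholder videos

    deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)  # writes per-frame x, y, likelihood to .csv
    deeplabcut.create_labeled_video(config_path, videos)              # overlays the predicted points on each video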

10. Process final data

  1. Apply the macros found at https://research-git.uiowa.edu/rainbo-hultman/facial-grimace-dlc to convert the raw data into the format used for this analysis (i.e., Euclidean distance).
  2. Import and apply the macros labeled Step1 and Step 2 to the csv to filter out all suboptimal data points and convert the data to an averaged Euclidean distance for the centermost points at the top and bottom of the eye.
  3. Run the macro called Step3 to mark each point as 0 (no squint) or 1 (squint) based on the threshold value in the script, which is set at 75 pixels.
    NOTE: The parameters for these macros may require adjustment depending on the experimental setup (see the discussion). The squint threshold and the automatic filter for the maximum eye-opening value may need to be changed depending on the size of the animal and its distance from the camera. The values used for removing suboptimal points may also need adjustment depending on how selectively the data must be filtered. An illustrative Python version of this processing is sketched below.
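
The macros above are the authors' implementation; as an illustration only, the same processing can be expressed in Python as in the sketch below. The body-part names, the 0.9 likelihood cutoff, and the file paths are assumptions; the 75-pixel squint threshold follows step 10.3.

    import numpy as np
    import pandas as pd

    # DLC writes a 3-row CSV header: scorer, bodyparts, coords (x, y, likelihood).
    df = pd.read_csv("mouse1_cgrpDLC_output.csv", header=[0, 1, 2], index_col=0)
    scorer = df.columns.get_level_values(0)[0]

    def coords(part):
        """Return x, y, and likelihood arrays for one tracked point."""
        sub = df[scorer][part]
        return sub["x"].to_numpy(), sub["y"].to_numpy(), sub["likelihood"].to_numpy()

    # Two top/bottom pairs near the center of the eye (names are illustrative).
    pairs = [("top_center_1", "bottom_center_1"), ("top_center_2", "bottom_center_2")]
    distances = []
    for top, bottom in pairs:
        tx, ty, tl = coords(top)
        bx, by, bl = coords(bottom)
        d = np.sqrt((tx - bx) ** 2 + (ty - by) ** 2)   # Euclidean distance between eyelids
        d[(tl < 0.9) | (bl < 0.9)] = np.nan            # drop low-confidence (suboptimal) points
        distances.append(d)

    eye_opening = np.nanmean(distances, axis=0)        # average of the two center-point distances
    squint = np.where(np.isnan(eye_opening), -1,       # -1 flags frames with no usable points
                      (eye_opening < 75).astype(int))  # 1 = squint, 0 = no squint (75-pixel threshold)

    out = pd.DataFrame({"eye_opening_px": eye_opening, "squint": squint})
    out.to_csv("squint_per_frame.csv", index=False)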

Results

Here, we provide a method for the reliable detection of squint at high temporal resolution using DeepLabCut. We optimized training parameters, and we provide an evaluation of this method's strengths and weaknesses (Figure 1).

After training our models, we verified that they were able to correctly estimate the top and bottom points of the eyelid (Figure 2), which serve as the coordinate points for the Euclidean distance measure. Eu...

Discussion

This protocol provides an easily accessible, in-depth method for using machine-learning-based tools that can differentiate squint at near-human accuracy while maintaining the same (or better) temporal resolution as prior approaches. Primarily, it makes evaluation of automated squint more readily available to a wider audience. Our new method for evaluating automated squint has several improvements compared to previous models. First, it provides a more robust metric than ASM by utilizing fewer points that actually contribut...

Disclosures

We have no conflicts of interest to disclose. The views in this paper are not representative of the VA or the United States Government.

Acknowledgements

Thanks to Rajyashree Sen for insightful conversations. Thanks to the McKnight Foundation Neurobiology of Disease Award (RH), NIH 1DP2MH126377-01 (RH), the Roy J. Carver Charitable Trust (RH), NINDS T32NS007124 (MJ), the Ramon D. Buckley Graduate Student Award (MJ), and VA-ORD (RR&D) MERIT 1 I01 RX003523-0 (LS).

Materials

Name | Company | Catalog Number | Comments
CUDA toolkit 11.8 |  |  |
cuDNN SDK 8.6.0 |  |  |
Intel computers with Windows 11, 13th gen |  |  |
LabFaceX 2D Eyelid Tracker Add-on Module for a Free Roaming Mouse | FaceX LLC | NA | Any camera that can record an animal's eye is sufficient, but this is our eye tracking hardware.
NVIDIA GPU driver, version 450.80.02 or higher |  |  |
NVIDIA RTX A5500, 24 GB DDR6 | NVIDIA | 490-BHXV | Any GPU that meets the minimum requirements specified for your version of DLC (currently 8 GB) is sufficient. We used an NVIDIA GeForce RTX 3080 Ti GPU.
Python 3.9-3.11 |  |  |
TensorFlow version 2.10 |  |  |

References

  1. GBD 2017 Disease and Injury Incidence and Prevalence Collaborators. Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990-2017: A systematic analysis for the global burden of disease study 2017. Lancet. 392 (10159), 1789-1858 (2018).
  2. Russo, A. F. Cgrp as a neuropeptide in migraine: Lessons from mice. Br J Clin Pharmacol. 80 (3), 403-414 (2015).
  3. Wattiez, A. S., Wang, M., Russo, A. F. Cgrp in animal models of migraine. Handb Exp Pharmacol. 255, 85-107 (2019).
  4. Hansen, J. M., Hauge, A. W., Olesen, J., Ashina, M. Calcitonin gene-related peptide triggers migraine-like attacks in patients with migraine with aura. Cephalalgia. 30 (10), 1179-1186 (2010).
  5. Mason, B. N., et al. Induction of migraine-like photophobic behavior in mice by both peripheral and central cgrp mechanisms. J Neurosci. 37 (1), 204-216 (2017).
  6. Rea, B. J., et al. Peripherally administered cgrp induces spontaneous pain in mice: Implications for migraine. Pain. 159 (11), 2306-2317 (2018).
  7. Kopruszinski, C. M., et al. Prevention of stress- or nitric oxide donor-induced medication overuse headache by a calcitonin gene-related peptide antibody in rodents. Cephalalgia. 37 (6), 560-570 (2017).
  8. Juhasz, G., et al. NO-induced migraine attack: Strong increase in plasma calcitonin gene-related peptide (cgrp) concentration and negative correlation with platelet serotonin release. Pain. 106 (3), 461-470 (2003).
  9. Aditya, S., Rattan, A. Advances in cgrp monoclonal antibodies as migraine therapy: A narrative review. Saudi J Med Med Sci. 11 (1), 11-18 (2023).
  10. Goadsby, P. J., et al. A controlled trial of erenumab for episodic migraine. N Engl J Med. 377 (22), 2123-2132 (2017).
  11. Mogil, J. S., Pang, D. S. J., Silva Dutra, G. G., Chambers, C. T. The development and use of facial grimace scales for pain measurement in animals. Neurosci Biobehav Rev. 116, 480-493 (2020).
  12. Whittaker, A. L., Liu, Y., Barker, T. H. Methods used and application of the mouse grimace scale in biomedical research 10 years on: A scoping review. Animals (Basel). 11 (3), 673 (2021).
  13. Langford, D. J., et al. Coding of facial expressions of pain in the laboratory mouse. Nat Methods. 7 (6), 447-449 (2010).
  14. Rea, B. J., et al. Automated detection of squint as a sensitive assay of sex-dependent calcitonin gene-related peptide and amylin-induced pain in mice. Pain. 163 (8), 1511-1519 (2022).
  15. Tuttle, A. H., et al. A deep neural network to assess spontaneous pain from mouse facial expressions. Mol Pain. 14, 1744806918763658 (2018).
  16. Mathis, A., et al. Deeplabcut: Markerless pose estimation of user-defined body parts with deep learning. Nat Neurosci. 21 (9), 1281-1289 (2018).
  17. Wattiez, A. S., et al. Different forms of traumatic brain injuries cause different tactile hypersensitivity profiles. Pain. 162 (4), 1163-1175 (2021).


Keywords

Medicine, automated behavior, facial grimace, squint, pain, migraine, mouse behavior
