Data collection
The research group collected 17-dimensional basic trait data (Supplementary Information 1 and Supplementary Information 2) for 25,480 individuals under community correction in Zhejiang Province, China, together with the corresponding Symptom Checklist-90 (SCL-90) and Health Survey Short Form (SF-12) data. These data were collected through the standardized community correction digital management platform of the Zhejiang Provincial Department of Justice and cover the period from January 2020 to December 2020. The 17 features are: age; sex; treatment level (general control, strict control); whether the individual is an adult; education level; domicile (urban or rural); whether the individual has an infectious disease; whether the individual belongs to any of three categories (unemployed, without relatives to rely on, without a place to live); whether there is a criminal record; crime type; supervision time; whether there is recidivism; whether there is an anti-government tendency; whether there is any of five kinds of involvement (terrorism, cults, drugs, gangs, and gun trafficking); whether there is any of four histories (drug use, escape, suicide, police assault); correction status (in correction, released from correction); and occupation before arrest. The traditional SCL-90 scale yielded nine psychological measurement indicators: somatization, obsessive-compulsive symptoms, interpersonal sensitivity, depression, anxiety, hostility, phobic anxiety, paranoid ideation, and psychoticism. Because the basic information registered by some judicial offices was incomplete, samples with missing basic information were removed during matching, leaving a total of 25,214 samples.
Because of patient privacy and compliance constraints, it is difficult to collect medical data in large quantities, especially data on specific groups. The research group invested substantial manpower, material, and financial resources in the construction of this dataset (Supplementary Information 3).
The research design was approved by the Ethics Research Committee of the Zhejiang Community Correction Management Bureau. This study was carried out in accordance with the Declaration of Helsinki, and all procedures followed the relevant guidelines and regulations. The Committee waived the requirement for informed consent because the researchers accessed the database for analysis purposes only, all data, including patient data, were de-identified, and there were no conflicts of interest among the personnel of the participating units.
The preprocessing of the tabular data described in this paper comprises missing value imputation, outlier detection and removal, and data standardization, as follows:
Missing values refer to situations where the values of certain features or variables in a table are absent or not recorded. In machine learning modeling, handling missing values is crucial36: choosing an appropriate filling method can improve the predictive performance of the model and make the data more complete and reliable37. The raw data used in this study contained some missing values, most of which were filled in by manually tracing the original source materials. For the small number of remaining missing values in quantitative data such as age, we used mean imputation, as the mean represents the central tendency of the data and helps maintain its distribution. For coded qualitative data such as crime type, we used median imputation, which reduces the impact of extreme values while preserving the order and levels of the data38. A minimal sketch of both steps is given below.
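The following sketch illustrates the two imputation strategies with scikit-learn's SimpleImputer; the column names and numeric codings are illustrative assumptions, not the study's actual schema.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Toy records; "age" is quantitative, "crime_type" is a numerically coded
# qualitative feature (both names/codings are hypothetical).
df = pd.DataFrame({"age": [25.0, np.nan, 41.0],
                   "crime_type": [3.0, 1.0, np.nan]})

# Mean imputation for quantitative features.
df[["age"]] = SimpleImputer(strategy="mean").fit_transform(df[["age"]])

# Median imputation for ordinal/coded qualitative features.
df[["crime_type"]] = SimpleImputer(strategy="median").fit_transform(df[["crime_type"]])
```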
Outliers are data points that differ significantly from the other data points or deviate from the normal range. Because outliers can adversely affect data analysis and modeling, they need to be eliminated or otherwise handled. To ensure the accuracy and reliability of the data, we performed outlier detection and elimination using the Rajda (3σ) criterion. Taking the given confidence probability of 99.7% as the standard, any data row deviating from the column mean by more than three times the column's standard deviation is deleted; that is, when the residual vb of a measured value xb exceeds 3σ, the value is treated as an outlier and eliminated:
$$\left| v_b \right| = \left| x_b - \bar{x} \right| > 3\sigma.$$
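A minimal sketch of this rule in NumPy, on synthetic data: the function simply keeps the values whose residual from the mean is within three standard deviations.

```python
import numpy as np

def remove_outliers_3sigma(x: np.ndarray) -> np.ndarray:
    """Keep values whose residual |x_b - mean| is within 3 standard deviations."""
    residual = np.abs(x - x.mean())
    return x[residual <= 3 * x.std()]

rng = np.random.default_rng(0)
x = np.append(rng.normal(50.0, 5.0, size=1000), 500.0)  # one gross outlier
print(len(remove_outliers_3sigma(x)))  # 1000: the 500.0 reading is dropped
```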
Data standardization transforms data of different scales and ranges onto a unified standard scale, eliminating the influence of dimensions and making different features comparable. In the data preprocessing stage, we applied min-max normalization to the numerical features: by linearly mapping the values of each feature to the range [0, 1] via x' = (x - x_min) / (x_max - x_min), we eliminate the differences between feature scales and make them comparable.
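For instance, with scikit-learn (an assumption about tooling; any equivalent implementation works):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[18.0, 3.0],
              [45.0, 12.0],
              [72.0, 36.0]])                     # toy feature columns
print(MinMaxScaler().fit_transform(X))           # each column mapped to [0, 1]
```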
Based on the Symptom Checklist-90 (SCL-90), this study constructed an adaptive scale-simplification (between question groups) screening and evaluation model built on a multi-label classification algorithm, and used the Health Survey Short Form (SF-12), a primary screening tool commonly used by community correction management institutions, as the simplified baseline method for comparative analysis.
We used the multi-label classification model for scale simplification (between question groups) to analyze each individual's degree of risk on the nine psychological measurement indicators, and simplified the scale structure based on the risk distribution. The goal of scale simplification is to reduce the questions, make the scale more readable and easier to understand, and help readers obtain the core information and insights more quickly. During scale simplification, trade-offs and decisions must be made according to the data and the needs of the audience, ensuring that enough information is retained while maintaining simplicity and clarity.
The basic principle of the multi-label classification algorithm (as shown in Fig. 1 and Table 1) is to learn the associations between features and labels from historical data so as to predict the labels of new samples. It can integrate the results for multiple labels, discover the associations among them, and resolve the conflicts that may arise in multi-label classification, thereby effectively improving classification accuracy. It can also help identify informative features quickly, reducing classification time.
Binary relevance (BR; first-order, the y labels are independent of each other) is a problem-transformation method whose core idea is to decompose the multi-label classification problem into independent binary problems. BR is simple and easy to understand, and the model performs well when there are no dependencies among the y labels.
Classifier chains (CC; high-order, the y labels are interdependent). The principle is similar to the BR transformation, except that the binary classifiers are linked in a chain: the first classifier is trained on the input data alone, and each subsequent classifier is trained on the input space plus the predictions of all previous classifiers in the chain. A number of binary classifiers are thus combined into a single multi-label model that can exploit the correlations among multiple targets.
RAkEL (random k-labelsets; high-order, the y labels are interdependent) divides the original large label set into a number of small label subsets, trains a corresponding classifier (here a random forest, RF) on each subset, and finally integrates the prediction results. RAkEL is a high-order strategy that can mine the correlations among multiple labels according to the size of the label subsets.
Figure 1. Multi-label classification algorithm (figure not reproduced here).
For the latter two algorithms, if there are clear dependencies among the labels, the generalization ability of the final model is better than that of a model constructed with binary relevance; the difficulty lies in finding a suitable label-dependency structure. A sketch of these strategies is given below.
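The following is a minimal sketch of the binary relevance and classifier chain strategies using scikit-learn, on synthetic data shaped like this study's (17 features, 9 labels): MultiOutputClassifier fits one independent classifier per label, while ClassifierChain passes earlier predictions down the chain. A RAkEL implementation is available in the third-party scikit-multilearn package (an assumption about tooling, not the authors' stated stack).

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain, MultiOutputClassifier

# Synthetic stand-in for the real data: 17 trait features, 9 SCL-90 risk labels.
X, Y = make_multilabel_classification(n_samples=500, n_features=17,
                                      n_classes=9, random_state=0)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

base = RandomForestClassifier(random_state=0)
br = MultiOutputClassifier(base)                 # binary relevance: one classifier per label
cc = ClassifierChain(base, order="random",       # chain: each classifier also sees
                     random_state=0)             # the previous classifiers' predictions

for model in (br, cc):
    model.fit(X_train, Y_train)
    print(type(model).__name__, model.score(X_test, Y_test))  # subset accuracy
```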
The core principle of oversampling is to add samples to the categories with fewer samples in order to achieve class balance. SMOTE (Synthetic Minority Over-sampling Technique) is the representative oversampling algorithm and was used during modeling to address class imbalance: it increases the number of minority samples by synthesizing new ones, thereby balancing the dataset.
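A minimal per-label illustration, assuming the imbalanced-learn package; for multi-label data, oversampling would be applied to one sub-label at a time (a simplifying assumption, since the source does not specify how SMOTE was combined with the multi-label setup).

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Imbalanced toy problem: roughly 90% negatives, 10% positives.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))

# Synthesize new minority-class samples until the classes are balanced.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))
```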
Because the total number of samples collected is sufficient, 5-fold cross-validation was applied to the training data to prevent the model from overfitting and to increase its robustness. The extracted feature data are randomly divided into five parts: four are used for training, and one is retained as test data. This process is repeated five times, using a different test fold each time; the five results are then summarized, and their average is taken as the estimate of the algorithm's performance. Five-fold cross-validation is a popular choice at present.
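In scikit-learn terms, the loop above amounts to the following sketch (synthetic data; a random forest stands in for whichever base classifier is used):

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

X, Y = make_multilabel_classification(n_samples=500, n_features=17,
                                      n_classes=9, random_state=0)
model = RandomForestClassifier(random_state=0)

# Five folds: train on four parts, test on the fifth, rotate, then average.
scores = cross_val_score(model, X, Y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(scores, scores.mean())
```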
In this paper, the SF-12 was used as the comparison tool. The SF-12 is a commonly used health questionnaire for assessing an individual's health status and quality of life. It is a simplified version derived from the SF-36 that retains the core concepts and dimensions of the SF-36 while reducing the number of questions, which improves the efficiency of administration. This simplicity and efficiency make the SF-12 a common tool in large-scale epidemiological research and clinical practice, where it is used to evaluate the health status of different groups, assess the effects of health interventions, and compare health differences between groups.
If all SCL-90 subscales of an actual sample are assessed as risk-free, the sample is defined as negative; if any subscale indicates risk, the sample is defined as positive. Similarly, if all sub-labels predicted by the multi-label model are 0, the sample is negative; if any predicted sub-label is positive, the sample is positive:
If all 9 actual labels are negative, the mental state is healthy and the sample is marked as negative.
If any of the 9 actual labels is positive, the mental state is unhealthy and the sample is marked as positive.
Similarly, if all 9 predicted labels are negative, the mental state is healthy and the sample is marked as negative.
If any of the 9 predicted labels is positive, the mental state is unhealthy and the sample is marked as positive.
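This aggregation rule is a one-liner; for example, with NumPy:

```python
import numpy as np

# Direct encoding of the rule above: a sample is positive if any of its
# nine sub-labels is positive.
Y = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0, 0, 0, 0]])   # two toy samples
y_sample = Y.any(axis=1).astype(int)          # -> array([0, 1])
```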
Based on the actual mental state and the predicted value, the confusion matrix (as shown in Table 2) is drawn; it is composed of four quantities: true positive (TP), false positive (FP), false negative (FN), and true negative (TN).
The overall effect of the model is evaluated by the following indicators: accuracy, sensitivity, precision, and F1. The relevant measures are defined as follows:
$$\text{Accuracy} = (\text{TP} + \text{TN})/(\text{TP} + \text{TN} + \text{FP} + \text{FN}),$$
$$\text{Sensitivity} = \text{TP}/(\text{TP} + \text{FN}),$$
$$\text{Precision} = \text{TP}/(\text{TP} + \text{FP}),$$
$$\text{F1} = 2 \times \text{Sensitivity} \times \text{Precision}/(\text{Precision} + \text{Sensitivity}).$$
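A direct transcription of these formulas, for concreteness:

```python
def evaluate(tp: int, tn: int, fp: int, fn: int):
    """Compute the four indicators from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * sensitivity * precision / (precision + sensitivity)
    return accuracy, sensitivity, precision, f1

print(evaluate(tp=80, tn=90, fp=10, fn=20))  # (0.85, 0.8, 0.888..., 0.842...)
```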
In the multi-label classification problem, evaluation indicators such as accuracy_score, Hamming loss, and 0-1 loss can be computed from the prediction results for a single label or from the overall prediction results.
accuracy_score is the fraction (by default) or count of correct predictions. In multi-label classification, the function returns the subset accuracy: if the entire set of predicted labels for a sample matches the true label combination, the subset accuracy for that sample is 1; otherwise, it is 0.
Hamming loss measures the model's prediction accuracy for each label, that is, the ratio of the number of incorrectly predicted labels to the total number of labels. It evaluates the prediction result for every label and returns a value between 0 and 1; the smaller the value, the more accurate the prediction.
0-1 loss is a common classification loss function, which is used to measure the prediction error of the classification model. It takes 1 when the prediction is wrong and 0 when the prediction is correct, so it is named 0-1 loss.
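All three indicators are available in scikit-learn; a toy example on two samples with three labels each:

```python
import numpy as np
from sklearn.metrics import accuracy_score, hamming_loss, zero_one_loss

Y_true = np.array([[1, 0, 1], [0, 1, 0]])
Y_pred = np.array([[1, 0, 0], [0, 1, 0]])

print(accuracy_score(Y_true, Y_pred))  # 0.5  : one sample matches exactly
print(hamming_loss(Y_true, Y_pred))    # 1/6  : one wrong label out of six
print(zero_one_loss(Y_true, Y_pred))   # 0.5  : fraction of imperfect samples
```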
Simplification rate refers to the proportion of the simplified scale relative to the original scale and can be used to evaluate the degree of simplification. Scale simplification means streamlining the structure of the original scale by reducing the number of items, deleting redundant or unnecessary items, or merging multiple items. The simplification rate of the scale can be calculated as follows: simplification rate = (number of simplified items / original number of items) × 100%. Accordingly, the simplification rate based on the multi-label model is calculated as: simplification rate = (number of sub-labels predicted to be negative) / (total number of samples).
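A literal transcription of the second definition, on toy predictions; note that, as stated, the quantity can exceed 1 when samples average more than one negative sub-label, so normalizing by the total number of sub-labels (Y_pred.size) may be the intended reading.

```python
import numpy as np

Y_pred = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0],
                   [0, 1, 0, 0, 0, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0, 0, 0, 0, 0]])  # toy predicted sub-labels

rate = (Y_pred == 0).sum() / len(Y_pred)  # definition as stated in the text
print(rate)                               # 26 negative sub-labels / 3 samples
```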