Optimized model architectures for deep learning on genomic data

Hyperparameter search space

The hyperparameter space used for optimization is listed in Table 1 and described in more detail here.

The first part of the model constructed by GenomeNet-Architect consists of a sequence of convolutional blocks (Fig. 1), each of which consists of convolutional layers. The number of blocks (Ncb) and the number of layers in each block (scb) are determined by the HPs ncb and nc in the following way: Ncb is set directly to ncb unless nc (which relates to the total number of convolutional layers) is smaller, in which case Ncb is set to nc. Their relation is therefore

$$N_{cb}=\left\{\begin{array}{cc}n_{c}, & \text{if } n_{c}\le n_{cb}\\ n_{cb}, & \text{otherwise}\end{array}\right.$$

scb is calculated by rounding the ratio of the nc hyperparameter to the actual number of convolutional blocks Ncb:

$$s_{cb}=\mathrm{round}\left(\frac{n_{c}}{N_{cb}}\right).$$

This results in nc determining the approximate total number of convolutional layers while satisfying the constraint that each convolutional block has the same (integer) number of layers. The total number of convolutional layers is then given by

$$N_{c}=N_{cb}\times s_{cb}.$$
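
As a minimal sketch (not the GenomeNet-Architect implementation itself), the relationship between the hyperparameters ncb and nc and the resulting layer counts can be expressed as follows; the exact tie-breaking behavior of the rounding is an implementation detail:

```python
def conv_layout(n_cb: int, n_c: int) -> tuple[int, int, int]:
    """Compute (N_cb, s_cb, N_c) from the hyperparameters n_cb and n_c."""
    N_cb = n_c if n_c <= n_cb else n_cb   # number of convolutional blocks
    s_cb = round(n_c / N_cb)              # layers per block (same integer for all blocks)
    N_c = N_cb * s_cb                     # resulting total number of convolutional layers
    return N_cb, s_cb, N_c

# Example: requesting n_cb = 4 blocks and about n_c = 10 layers in total
# yields 4 blocks of round(10 / 4) = 2 layers each, i.e. N_c = 8 layers.
print(conv_layout(4, 10))  # (4, 2, 8)
```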

f0 and fend determine the number of filters in the first and last convolutional layers, respectively. The number of filters in intermediate layers is interpolated exponentially. If residual blocks are used, the number of filters within each convolutional block needs to be the same, in which case the number of filters changes block-wise. Otherwise, each convolutional layer can have a different number of filters. If there is only one convolutional layer, f0 is used as the number of filters in this layer. Thus, the number of filters for the ith convolutional layer is:

$$f_{i}=\left\lceil f_{0}\times\left(\frac{f_{end}}{f_{0}}\right)^{j(i)}\right\rceil,\quad j(i)=\left\{\begin{array}{cc}\left\lfloor \frac{i}{s_{cb}}\right\rfloor \times\frac{1}{N_{cb}-1}, & \text{if residual blocks are used}\\ \frac{i}{N_{c}-1}, & \text{otherwise}\end{array}\right.$$

The kernel size of the convolutional layers is also exponentially interpolated between k0 and kend. If the model has only one convolutional layer, the kernel size is set to k0. The kernel size of the convolutional layer i is:

$$k_{i}=\left\lceil k_{0}\times\left(\frac{k_{end}}{k_{0}}\right)^{\frac{i}{N_{c}-1}}\right\rceil.$$
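
The two interpolation formulas can be illustrated with the following sketch (an assumption-level illustration using the definitions above, not the tool's actual code); res_block indicates whether residual blocks are used:

```python
import math

def n_filters(i, f0, f_end, N_c, N_cb, s_cb, res_block):
    """Number of filters of convolutional layer i (0-based), per the formula above."""
    if N_c == 1:
        return f0
    if res_block:
        if N_cb == 1:
            return f0                      # single block: all layers keep f0
        # with residual blocks, the filter count can only change block-wise
        j = (i // s_cb) / (N_cb - 1)
    else:
        j = i / (N_c - 1)
    return math.ceil(f0 * (f_end / f0) ** j)

def kernel_size(i, k0, k_end, N_c):
    """Kernel size of convolutional layer i (0-based), per the formula above."""
    if N_c == 1:
        return k0
    return math.ceil(k0 * (k_end / k0) ** (i / (N_c - 1)))

# Example: 6 layers in 3 blocks of 2, filters from 32 to 256, kernels from 3 to 15
print([n_filters(i, 32, 256, 6, 3, 2, res_block=True) for i in range(6)])
# [32, 32, 91, 91, 256, 256]
print([kernel_size(i, 3, 15, 6) for i in range(6)])
# [3, 5, 6, 8, 11, 15]
```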

The convolutional layers can use dilated convolutions, where the dilation factor increases exponentially from 1 to dend within each convolutional block. Using rem as the remainder operation, the dilation factor for each layer is then:

$$d_{i}=\left\lceil d_{end}^{\,\left(\left\lfloor i\ \mathrm{rem}\ s_{cb}\right\rfloor\right)/\left(s_{cb}-1\right)}\right\rceil.$$
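
A small sketch of the resulting per-layer dilation factors (illustrative only), showing how the dilation ramp restarts in every block:

```python
import math

def dilation(i, d_end, s_cb):
    """Dilation factor of convolutional layer i (0-based), restarting in each block."""
    if s_cb == 1:
        return 1                                   # single-layer blocks: no dilation ramp
    return math.ceil(d_end ** ((i % s_cb) / (s_cb - 1)))

# Example: blocks of 4 layers and d_end = 8 give dilations cycling 1, 2, 4, 8
print([dilation(i, 8, 4) for i in range(8)])  # [1, 2, 4, 8, 1, 2, 4, 8]
```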

We apply max-pooling after convolutional layers, depending on the total max-pooling factor pend. Max-pooling layers with a stride and kernel size of 2, or a power of 2, are inserted between convolutional layers so that the sequence length is reduced approximately exponentially along the model. pend represents the approximate total reduction in sequence length before the output of the convolutional part is fed into the final GAP layer or into the RNN layers, depending on the model type.
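
One way such a placement could work is sketched below; this is an assumption-level illustration of distributing power-of-2 pooling factors so that the cumulative reduction grows roughly exponentially towards pend, not the exact placement rule used by the tool:

```python
import math

def pooling_strides(p_end: int, N_c: int) -> list[int]:
    """Power-of-2 pooling factor applied after each of the N_c layers (1 = no pooling)."""
    strides, reduction_so_far = [], 1
    for i in range(N_c):
        target = p_end ** ((i + 1) / N_c)              # desired cumulative reduction
        stride = 2 ** max(0, round(math.log2(target / reduction_so_far)))
        strides.append(stride)
        reduction_so_far *= stride
    return strides

# Example: a total reduction of ~64 spread over 6 convolutional layers
print(pooling_strides(64, 6))  # [2, 2, 2, 2, 2, 2]
```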

For CNN-GAP, outputs from multiple convolutional blocks can be pooled, concatenated, and fed into a fully connected network. Out of the Ncb block outputs, the last max(1, (1 − rs) × Ncb) of them (rounded to an integer) are fed into global average pooling layers, where rs is the skip-ratio hyperparameter.
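
The effect of the skip ratio can be sketched as follows; this assumes the "last max(1, ⌈(1 − rs) × Ncb⌉) block outputs" reading described above and is an illustration rather than the tool's code:

```python
import math

def gap_block_indices(n_cb: int, r_s: float) -> list[int]:
    """Indices (0-based) of the convolutional-block outputs fed into GAP layers."""
    n_used = max(1, math.ceil((1 - r_s) * n_cb))   # at least one block output is used
    return list(range(n_cb - n_used, n_cb))        # the last n_used block outputs

# Example: 5 blocks and a skip ratio of 0.5 -> the last 3 block outputs are pooled
print(gap_block_indices(5, 0.5))  # [2, 3, 4]
```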

GenomeNet-Architect uses the mlrMBO software [38] with a Gaussian process model from the DiceKriging R package [39], configured with a Matérn-3/2 kernel [40], for optimization. It uses the UCB infill criterion [31], sampling from an exponential distribution as a batch proposal method [32]. In our experiment, we proposed three different configurations simultaneously in each iteration.

For both tasks, we trained the proposed model configurations for a given amount of time and then evaluated them on the validation set. For each architecture (CNN-GAP and CNN-RNN) and for each sequence length of the viral classification task (150 nt and 10,000 nt), the best-performing model configuration found within each optimization setting (2 h, 6 h) was saved and considered for further evaluation. For the pathogenicity detection task, we only evaluated the 2 h optimization. For each task and sequence length, the first t = t1 (2 h) optimization evaluated a total of 788 configurations, parallelized on 24 GPUs, and ran for 2.8 days (wall time). For the viral classification task, the warm-started t = t2 (6 h) optimization evaluated 408 more configurations and ran for 7.0 days for each sequence length.

During HPO, the number of samples between model validation evaluations was set dynamically, depending on the time taken for a single model training step. It was chosen so that approximately 20 validation evaluations were performed for each model in the first phase (t = 2 h) and approximately 100 in the second phase (t = 6 h). In the first phase, the highest validation accuracy found during model training was used as the objective value to be optimized. In the second phase, the second-highest validation accuracy among the last 20 validation evaluations was used as the objective value. This was done to avoid rewarding models whose very noisy training process produces occasional performance outliers.
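
The two objective definitions can be sketched as follows (illustrative helper functions, not the actual optimization code):

```python
def objective_phase1(val_accuracies: list[float]) -> float:
    """Phase 1 (t = 2 h): best validation accuracy seen during training."""
    return max(val_accuracies)

def objective_phase2(val_accuracies: list[float]) -> float:
    """Phase 2 (t = 6 h): second-highest accuracy among the last 20 evaluations."""
    last = sorted(val_accuracies[-20:], reverse=True)
    return last[1] if len(last) > 1 else last[0]

history = [0.71, 0.74, 0.93, 0.78, 0.79]   # 0.93 is a single noisy outlier
print(objective_phase1(history))  # 0.93
print(objective_phase2(history))  # 0.79 -> the outlier is not rewarded
```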

The batch size of each model architecture is chosen to be as large as possible while still fitting into GPU memory. To do this, GenomeNet-Architect performs a binary search to find the largest batch size that still fits on the GPU and subtracts a 10% safety margin to avoid potential training failures.
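
A minimal sketch of such a search is shown below; fits_in_gpu_memory is a hypothetical probe (e.g. a single training step wrapped in a try/except for out-of-memory errors) and is not part of the original description:

```python
def find_batch_size(fits_in_gpu_memory, lo: int = 1, hi: int = 4096) -> int:
    """Binary-search the largest feasible batch size, then keep a ~10% safety margin."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if fits_in_gpu_memory(mid):
            best, lo = mid, mid + 1    # feasible -> try a larger batch size
        else:
            hi = mid - 1               # out of memory -> try a smaller batch size
    return max(1, int(best * 0.9))     # subtract a 10% safety margin

# Example with a toy memory model: batches above 1300 samples do not fit
print(find_batch_size(lambda b: b <= 1300))  # 1170
```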

For the viral classification task, the training and validation samples are generated by randomly sampling FASTA genome files and splitting them into disjoint consecutive subsequences from a random starting point. A batch size that is a multiple of 3 (the number of target classes) is used, and each batch contains the same number of samples from each class. Since we work with datasets that have different quantities of data for each class, this effectively oversamples the smaller classes compared to the largest class. The validation set performance was evaluated at regular intervals after training on a predetermined number of samples, set to 6,000,000 for the 150 nt models and 600,000 for the 10,000 nt models. The evaluation used a subsample of the validation set equal to 50% of the training samples seen between validations. During model training, the typical batch size was 1200 for the 150 nt models and either 120, 60, or 30 for the 10,000 nt models. Unlike during training and validation, the test set samples were not generated by selecting FASTA files at random. Instead, test samples were generated by iterating through all individual files and using consecutive subsequences starting from the first position.

For the pathogenicity detection task, the validation performance was evaluated on the complete validation set at regular intervals, namely after every 5,000,000 training samples. A batch size of 1000 was used for all models except GAP-RNN, for which this batch size did not fit into our GPU memory; a batch size of 500 was used instead.
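
As an illustration of the class-balanced batch construction (a hypothetical helper, with sampling simplified to drawing with replacement):

```python
import random

def balanced_batch(samples_by_class: dict, batch_size: int) -> list:
    """Build a batch with an equal number of samples from each class."""
    classes = sorted(samples_by_class)
    per_class = batch_size // len(classes)            # e.g. 1200 / 3 = 400 per class
    batch = []
    for c in classes:
        # drawing with replacement effectively oversamples the smaller classes
        batch += random.choices(samples_by_class[c], k=per_class)
    random.shuffle(batch)
    return batch

# Example: three classes of very different sizes, batch size 6 -> 2 samples per class
toy = {"bacteria": list(range(1000)), "phage": list(range(100)), "non-phage": list(range(10))}
print(len(balanced_batch(toy, 6)))  # 6
```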

For both tasks, we chose a learning rate schedule that automatically halved the learning rate if the balanced accuracy did not increase for 3 consecutive evaluations on the validation set. We stopped training when the balanced accuracy did not increase for 10 consecutive evaluations. This typically corresponds to stopping after 40–50 evaluations for the 150 nt models and 25–35 evaluations for the 10,000 nt models in the viral classification task, and after 5–15 evaluations for the pathogenicity detection task.
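
A simplified sketch of this schedule (assumed logic, with the learning rate halved at every third consecutive non-improving evaluation):

```python
class PlateauSchedule:
    """Halve the LR after 3 evaluations without improvement; stop after 10."""

    def __init__(self, lr: float, patience_lr: int = 3, patience_stop: int = 10):
        self.lr = lr
        self.best = float("-inf")
        self.evals_since_best = 0
        self.patience_lr = patience_lr
        self.patience_stop = patience_stop

    def update(self, balanced_accuracy: float) -> bool:
        """Call after each validation evaluation; returns True when training should stop."""
        if balanced_accuracy > self.best:
            self.best = balanced_accuracy
            self.evals_since_best = 0
        else:
            self.evals_since_best += 1
            if self.evals_since_best % self.patience_lr == 0:
                self.lr /= 2                          # reduce the learning rate by half
        return self.evals_since_best >= self.patience_stop
```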

To evaluate the performance of the architectures and HP configurations, the models proposed by GenomeNet-Architect were trained until convergence on the training set; convergence was checked on the validation set. The resulting models were then evaluated on a test set that was not seen during optimization.

For the viral classification task, we downloaded all complete bacterial and viral genomes from GenBank and RefSeq using the genome_updater script (https://github.com/pirovc/genome_updater) on 04-11-2020 with the arguments -d genbank,refseq -g bacteria/viral -c all and -l "Complete Genome". To filter out possible contamination by plasmids and bacteriophages, we removed all genomes with more than one chromosome from the bacteria set. Filtering out plasmids, whose database annotations are inconsistent and often poor, avoids introducing substantial noise in sequence and annotation, since they can be incorrectly included in or excluded from genomes. We used the taxonomic metadata to split the viral set into eukaryotic and prokaryotic viruses. Overall, this resulted in three subgroups: bacteria, prokaryotic bacteriophages, and eukaryotic viruses (referred to as non-phage viruses; Table 2). To assess the models' generalization performance, we split the genomes into training, validation, and test subsets. We used the publication date metadata to split the data by publication time, with the training data consisting mostly of genomes published before 2020 and the validation and test data consisting of more recently published genomes. Thus, the classification performance of the models on yet unknown data, such as newly sequenced DNA, is estimated. For smaller datasets, average nucleotide identity (ANI) information generated with tools such as Mashtree [41] can alternatively be used to perform the splits and avoid overlap between training and test data.

The training data was used for model fitting, the validation data was used to estimate generalization performance during HPO and to check for convergence during final model training, and the test data was used to compare final model performance and draw conclusions. The test data was not seen by the optimization process. The training, validation and test sets represent approximately 70%, 20%, and 10% of the total data, respectively.

The number of FASTA files and the number of non-overlapping samples in each set of the viral classification task are listed in Table 2. Listed is the number of distinct non-overlapping sequences that could theoretically be extracted from the datasets if they were split into consecutive subsequences. However, whenever the training process reads a file again, e.g. in a different epoch, the starting point of the sampled sequence is randomized, resulting in a much larger number of possible distinct (though overlapping) samples. Because the test set is class-imbalanced, we report class-balanced measures, i.e. measures calculated for each class individually and then averaged over all classes.

For the pathogenicity classification task, we downloaded the dataset from https://zenodo.org/records/3678563 [13]. Specifically, the training files used are nonpathogenic_train.fasta.gz and pathogenic_train.fasta.gz; the validation files used are pathogenic_val.fasta.gz and nonpathogenic_val.fasta.gz; and the test files used are nonpathogenic_test_1.fasta.gz, nonpathogenic_test_2.fasta.gz, pathogenic_test_1.fasta.gz, and pathogenic_test_2.fasta.gz.

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
