This document steps through the process used to estimate RT (reverse transcriptase) error rates. This R package is a companion to the similarly named ‘errrt’ Perl package, which is responsible for preprocessing the raw sequencing data.
An outline of the experimental process is shown in the following figure:

[Figure: Experimental Design]
The errrt package is responsible for the bottom and right side of the image. It:

1. takes the raw fastq data and merges the reads using flash,
2. writes out the index and identifier for all reads identical to the template,
3. writes out the sequences for the reads that are not identical to the template,
4. aligns those reads against the template with fasta36,
5. interprets the fasta36 output, and
6. writes a table of the mismatches, insertions, and deletions found.
This package, creatively named ‘errRt’, takes a sample sheet describing the samples, reads the tables of identical and mutated reads, and creates matrices of mutation rates for the various classes of mutation.
The Perl and R packages may be installed from their source repositories, for example:
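A minimal sketch; the GitHub account ‘abelew’ is an assumption, so substitute the real repository locations as needed.

## Install the R package from a hypothetical GitHub location.
install.packages("devtools")
devtools::install_github("abelew/errRt")
## The errrt Perl scripts are invoked directly from a checkout, so cloning
## its repository (and installing its CPAN dependencies) suffices for the
## preprocessing side.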
I split the preprocessing into discrete steps so that I could check each phase and try to ensure correctness. The errrt Perl package was written so that nearly everything may be set via command line parameters, but all the defaults were written to suit this data set. Thus a fairly simple bash for loop handled the work of preprocessing the data.
The following fragment of bash was run inside my preprocessing/ directory, which contained 3 subdirectories: s1/, s2/, and s3/, holding the raw reads for the 3 samples. The Excel spreadsheet in inst/extdata/all_samples.xlsx describes them: s1 is the control, s2 has low magnesium, and s3 has high magnesium.
#!/usr/bin/env bash
## The default options should take care of most of what these require.
start=$(pwd)
for sample in $(/bin/ls -d s*); do
cd ${sample}
## Merge the forward and reverse reads.
../../errrt/script/merge_reads.pl --read1 *_1.fastq.gz --read2 *_2.fastq.gz
## Extract the reads with the template sequence into a fasta file.
## This assumes input named 'step1.extendedFrags.fastq.xz' and output named 'step2.fasta'
## This output is not compressed because it is easier to invoke fasta36 on uncompressed data.
## Thus it defaults to step2.fasta as output.
../../errrt/script/extract_fasta.pl
## Run fasta against the template sequence.
## This assumes step2.fasta as input and provides step3.txt.xz as output.
../../errrt/script/run_fasta.pl
## Boil the fasta output into something we can read.
## This assumes step3.txt.xz as input and step4.txt.xz as output.
../../errrt/script/parse_fasta.pl
cd ${start}
done

As the above shell fragment states, the final outputs of errrt are step2_identical_reads.txt.xz and step4.txt.xz. These filenames are therefore recorded in the sample sheet for each sample, in the ‘identtable’ and ‘mutationtable’ columns respectively.
I decided, for the moment at least, to copy these files into inst/extdata and modify the sample sheet in inst/extdata to refer to them.
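A sketch of that copy step in R, assuming the preprocessing/ directory layout described above:

## Copy the per-sample errrt outputs into the package's extdata directory.
for (sample in c("s1", "s2", "s3")) {
  for (filename in c("step2_identical_reads.txt.xz", "step4.txt.xz")) {
    src <- file.path("preprocessing", sample, filename)
    dst_dir <- file.path("inst", "extdata", sample)
    dir.create(dst_dir, recursive=TRUE, showWarnings=FALSE)
    file.copy(src, file.path(dst_dir, filename))
  }
}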
I wrapped the various steps performed by errRt into a single main function, create_matrices(). The following is the invocation I used for this data, along with some explanation of what is happening.
library(errRt)
basedir <- system.file("extdata", package="errRt")
metadata_file <- file.path(basedir, "all_samples.xlsx")
metadata_file
## [1] "/tmp/RtmpPGuTJL/temp_libpath9dc2c624a249c/errRt/extdata/all_samples.xlsx"
Here is the resulting experimental design. The create_matrices() function will use this to read the tables of identical reads and mutated reads.
| | sampleid | condition | batch | identtable | mutationtable |
|---|---|---|---|---|---|
| s1 | s1 | control | b201910 | s1/step2_identical_reads.txt.xz | s1/step4.txt.xz |
| s2 | s2 | low | b201910 | s2/step2_identical_reads.txt.xz | s2/step4.txt.xz |
| s3 | s3 | high | b201910 | s3/step2_identical_reads.txt.xz | s3/step4.txt.xz |
The create_matrices() function has a few options to adjust the stringency of the various ways to define an error. It counts the various error types with quantify_parsed(), gathers them into matrices by sample (along with the errors perceived to come from the sequencer), and calculates aggregate error rates for the various classes of error. In other words, it does pretty much everything in one function call.
error_data <- create_matrices(sample_sheet=metadata_file, ident_column="identtable",
mut_column="mutationtable", min_reads=3, min_indexes=3,
min_sequencer=10, min_position=24, prune_n=TRUE,
                              verbose=TRUE)
## Starting sample: 1.
## Reading the file containing mutations: s1/step4.txt.xz
## Reading the file containing the identical reads: s1/step2_identical_reads.txt.xz
## Removing any differences before position: 24.
## Before pruning, there are: 1156535 reads.
## After position pruning, there are: 1037310 reads: 90%.
## Removing any reads with 'N' as the hit.
## Before N pruning, there are: 1037310 reads.
## After N pruning, there are: 1021066 reads: 98%.
## Gathering information about the indexes observed, this is slow.
## Before read pruning, there are: 1742743 indexes.
## After read pruning, there are: 838158 indexes.
## Removing indexes with fewer than 3 indexes.
## Before index pruning, there are: 1021066 changed reads.
## Before index pruning, there are: 4681501 identical reads.
## After index pruning, there are: 532571 changed reads: 52%.
## After index pruning, there are: 3663814 identical reads: 78%.
## Gathering identical, mutant, and sequencer reads/indexes.
## Counting by direction.
## Counting by string.
## Subsetting based on mutations with at least 3 indexes.
## Counting by reference position.
## Counting by identity string.
## Counting by reference nucleotide.
## Counting by product nucleotide.
## Counting by mutation type.
## Counting by transitions/transversion.
## Counting strong/weak.
## Counting insertions by position.
## Counting insertions by nucleotide
## Counting deletions by position.
## Counting deletions by nucleotide.
## Starting sample: 2.
## Reading the file containing mutations: s2/step4.txt.xz
## Reading the file containing the identical reads: s2/step2_identical_reads.txt.xz
## Removing any differences before position: 24.
## Before pruning, there are: 3421203 reads.
## After position pruning, there are: 1758479 reads: 51%.
## Removing any reads with 'N' as the hit.
## Before N pruning, there are: 1758479 reads.
## After N pruning, there are: 1732605 reads: 99%.
## Gathering information about the indexes observed, this is slow.
## Before read pruning, there are: 1263563 indexes.
## After read pruning, there are: 694479 indexes.
## Removing indexes with fewer than 3 indexes.
## Before index pruning, there are: 1732605 changed reads.
## Before index pruning, there are: 5230976 identical reads.
## After index pruning, there are: 874781 changed reads: 50%.
## After index pruning, there are: 4834367 identical reads: 92%.
## Gathering identical, mutant, and sequencer reads/indexes.
## Counting by direction.
## Counting by string.
## Subsetting based on mutations with at least 3 indexes.
## Counting by reference position.
## Counting by identity string.
## Counting by reference nucleotide.
## Counting by product nucleotide.
## Counting by mutation type.
## Counting by transitions/transversion.
## Counting strong/weak.
## Counting insertions by position.
## Counting insertions by nucleotide
## Counting deletions by position.
## Counting deletions by nucleotide.
## Starting sample: 3.
## Reading the file containing mutations: s3/step4.txt.xz
## Reading the file containing the identical reads: s3/step2_identical_reads.txt.xz
## Removing any differences before position: 24.
## Before pruning, there are: 4309681 reads.
## After position pruning, there are: 1564155 reads: 36%.
## Removing any reads with 'N' as the hit.
## Before N pruning, there are: 1564155 reads.
## After N pruning, there are: 1532016 reads: 98%.
## Gathering information about the indexes observed, this is slow.
## Before read pruning, there are: 887982 indexes.
## After read pruning, there are: 465046 indexes.
## Removing indexes with fewer than 3 indexes.
## Before index pruning, there are: 1532016 changed reads.
## Before index pruning, there are: 3583390 identical reads.
## After index pruning, there are: 783358 changed reads: 51%.
## After index pruning, there are: 3332425 identical reads: 93%.
## Gathering identical, mutant, and sequencer reads/indexes.
## Counting by direction.
## Counting by string.
## Subsetting based on mutations with at least 3 indexes.
## Counting by reference position.
## Counting by identity string.
## Counting by reference nucleotide.
## Counting by product nucleotide.
## Counting by mutation type.
## Counting by transitions/transversion.
## Counting strong/weak.
## Counting insertions by position.
## Counting insertions by nucleotide
## Counting deletions by position.
## Counting deletions by nucleotide.
## Working on miss_reads_by_position.
## Working on miss_indexes_by_position.
## Working on miss_sequencer_by_position.
## Working on miss_reads_by_string.
## Warning in order(as.numeric(rownames(matrices[[t]]))): NAs introduced by
## coercion
## Working on miss_indexes_by_string.
## Warning in order(as.numeric(rownames(matrices[[t]]))): NAs introduced by
## coercion
## Working on miss_sequencer_by_string.
## Warning in order(as.numeric(rownames(matrices[[t]]))): NAs introduced by
## coercion
## Working on miss_reads_by_ref_nt.
## Working on miss_indexes_by_ref_nt.
## Working on miss_sequencer_by_ref_nt.
## Working on miss_reads_by_hit_nt.
## Working on miss_indexes_by_hit_nt.
## Working on miss_sequencer_by_hit_nt.
## Working on miss_reads_by_type.
## Working on miss_indexes_by_type.
## Working on miss_sequencer_by_type.
## Working on miss_reads_by_trans.
## Working on miss_indexes_by_trans.
## Working on miss_sequencer_by_trans.
## Working on miss_reads_by_strength.
## Working on miss_indexes_by_strength.
## Working on miss_sequencer_by_strength.
## Working on insert_reads_by_position.
## Working on insert_indexes_by_position.
## Working on insert_sequencer_by_position.
## Working on insert_reads_by_nt.
## Working on insert_indexes_by_nt.
## Working on insert_sequencer_by_nt.
## Working on delete_reads_by_position.
## Working on delete_indexes_by_position.
## Working on delete_sequencer_by_position.
## Working on delete_reads_by_nt.
## Working on delete_indexes_by_nt.
## Working on delete_sequencer_by_nt.
## Skipping table: delete_reads_by_position
## Skipping table: delete_indexes_by_position
## Skipping table: delete_sequencer_by_position
## Skipping table: delete_reads_by_nt
## Skipping table: delete_indexes_by_nt
## Skipping table: delete_sequencer_by_nt
The options are explained in the help documentation, but here is an overview of the ones which might not be self-evident (see also the sketch after this list):

- min_position: differences observed before this position of the template (here 24) are ignored.
- prune_n: when TRUE, reads whose observed (‘hit’) base is an N are removed.
- min_reads: indexes represented by fewer than this many reads are dropped.
- min_indexes: a mutation must be observed in at least this many separate indexes to be counted as an RT error.
- min_sequencer: the threshold used when deciding whether a mutation is attributable to the sequencer rather than to the RT; see the help page for the exact criterion.
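For example, to gauge how much the thresholds matter, one could rerun with stricter values; a sketch, differing from the invocation above only in the parameter values:

## Hypothetical stricter run: more reads per index and more indexes per
## mutation are required, so more of the data will be discarded.
stricter <- create_matrices(sample_sheet=metadata_file, ident_column="identtable",
                            mut_column="mutationtable", min_reads=5, min_indexes=5,
                            min_sequencer=10, min_position=24, prune_n=TRUE,
                            verbose=FALSE)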
create_matrices() works in 3 main stages:

1. It reads and quantifies each sample's tables of identical and mutated reads with quantify_parsed().
2. It gathers the resulting counts into matrices by sample, separating RT errors from apparent sequencer errors.
3. It normalizes the matrices into aggregate error rates for each class of error.
While create_matrices() is running, it prints some information about what it is doing and gives some idea of how much data is lost to the stringency parameters. Its result is a list of count and rate matrices; the examples below use its ‘normalized_by_counts’ element.
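One quick way to see what came back is to look at the names of the list and of one of its elements (element names other than ‘normalized_by_counts’ will be whatever your version returns):

## Peek at the top-level elements and at the normalized matrices.
names(error_data)
names(error_data[["normalized_by_counts"]])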
As one might imagine, a lot of matrices are returned. Here are some examples of the data after normalization; in each case I will first print the data for the RT mutations, followed by the sequencer.
gplots::heatmap.2(error_data[["normalized_by_counts"]][["miss_indexes_by_position"]],
trace="none",
                   Rowv=FALSE)
## Warning in gplots::heatmap.2(error_data[["normalized_by_counts"]]
## [["miss_indexes_by_position"]], : Discrepancy: Rowv is FALSE, while dendrogram
## is `both'. Omitting row dendogram.
gplots::heatmap.2(error_data[["normalized_by_counts"]][["miss_sequencer_by_position"]],
trace="none",
                   Rowv=FALSE)
## Warning in gplots::heatmap.2(error_data[["normalized_by_counts"]]
## [["miss_sequencer_by_position"]], : Discrepancy: Rowv is FALSE, while dendrogram
## is `both'. Omitting row dendogram.
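The warnings above can be avoided by telling heatmap.2 to draw only the column dendrogram, which is consistent with Rowv=FALSE; for example:

## dendrogram="column" matches Rowv=FALSE, silencing the discrepancy warning.
gplots::heatmap.2(error_data[["normalized_by_counts"]][["miss_indexes_by_position"]],
                  trace="none", Rowv=FALSE, dendrogram="column")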
RT mismatches by reference nucleotide:

| | s1 | s2 | s3 |
|---|---|---|---|
| A | 0.0965 | 0.2340 | 0.5623 |
| C | 0.6078 | 0.8293 | 0.4911 |
| G | 0.8937 | 0.3998 | 0.3085 |
| T | 0.0971 | 0.1681 | 0.2422 |
Sequencer mismatches by reference nucleotide:

| | s1 | s2 | s3 |
|---|---|---|---|
| A | 0.3294 | 0.3464 | 0.5231 |
| C | 0.3180 | 0.5133 | 0.2713 |
| G | 0.3688 | 0.2011 | 0.2530 |
| T | 0.3956 | 0.5071 | 0.7605 |
RT mismatches by product nucleotide:

| | s1 | s2 | s3 |
|---|---|---|---|
| A | 0.4794 | 0.7054 | 0.6678 |
| C | 0.1254 | 0.1721 | 0.1012 |
| G | 0.1745 | 0.2213 | 0.4140 |
| T | 0.5130 | 0.3522 | 0.6819 |
Sequencer mismatches by product nucleotide:

| | s1 | s2 | s3 |
|---|---|---|---|
| A | 0.2307 | 0.2967 | 0.4339 |
| C | 0.4693 | 0.6286 | 0.3616 |
| G | 0.6785 | 0.4078 | 0.4838 |
| T | 0.1971 | 0.2302 | 0.3422 |
RT mutations by type (reference_product):

| | s1 | s2 | s3 |
|---|---|---|---|
| A_C | 0.0553 | 0.0404 | 0.0433 |
| A_G | 0.0376 | 0.1609 | 0.2962 |
| A_T | 0.0181 | 0.0362 | 0.0419 |
| C_A | 0.3978 | 0.2640 | 0.1602 |
| C_G | 0.0176 | 0.0416 | 0.0557 |
| C_T | 0.1644 | 0.2914 | 0.5133 |
| G_A | 0.0720 | 0.2747 | 0.1711 |
| G_C | 0.0146 | 0.0182 | 0.0171 |
| G_T | 0.7420 | 0.1984 | 0.1267 |
| T_A | 0.0095 | 0.0458 | 0.0392 |
| T_C | 0.0441 | 0.0483 | 0.0528 |
| T_G | 0.0921 | 0.0964 | 0.0927 |
Sequencer mutations by type (reference_product):

| | s1 | s2 | s3 |
|---|---|---|---|
| A_C | 0.2403 | 0.1817 | 0.1850 |
| A_G | 0.0803 | 0.0892 | 0.0884 |
| A_T | 0.0407 | 0.0565 | 0.0576 |
| C_A | 0.1290 | 0.1515 | 0.1462 |
| C_G | 0.0732 | 0.0728 | 0.0661 |
| C_T | 0.1331 | 0.1315 | 0.1267 |
| G_A | 0.0560 | 0.0518 | 0.0526 |
| G_C | 0.0580 | 0.0759 | 0.0838 |
| G_T | 0.1813 | 0.1558 | 0.1579 |
| T_A | 0.0457 | 0.0425 | 0.0419 |
| T_C | 0.1214 | 0.1257 | 0.1293 |
| T_G | 0.4494 | 0.4930 | 0.4918 |
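One way to compare these two spectra visually is a pair of barplots; a sketch, assuming the type matrices live under ‘normalized_by_counts’ with the names printed in the log above:

## Side-by-side barplots of the mutation-type spectra for RT and sequencer.
rt_types <- error_data[["normalized_by_counts"]][["miss_indexes_by_type"]]
seq_types <- error_data[["normalized_by_counts"]][["miss_sequencer_by_type"]]
old_par <- par(mfrow=c(1, 2), mar=c(6, 4, 2, 1))
barplot(t(rt_types), beside=TRUE, las=2, main="RT errors",
        legend.text=colnames(rt_types))
barplot(t(seq_types), beside=TRUE, las=2, main="Sequencer errors")
par(old_par)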
RT transitions/transversions:

| | s1 | s2 | s3 |
|---|---|---|---|
| transition | 0.2309 | 1.0989 | 0.8993 |
| transversion | 1.1613 | 0.5834 | 0.8074 |
Sequencer transitions/transversions:

| | s1 | s2 | s3 |
|---|---|---|---|
| transition | 0.297 | 0.5457 | 0.366 |
| transversion | 1.081 | 0.8903 | 1.604 |
RT mutations by strong/weak (G/C vs. A/T) category:

| | s1 | s2 | s3 |
|---|---|---|---|
| strong_strong | 0.0267 | 0.0598 | 0.1087 |
| strong_weak | 1.1740 | 1.4607 | 0.6864 |
| weak_strong | 0.3137 | 0.2672 | 0.4634 |
| weak_weak | 0.0195 | 0.0795 | 0.1125 |
Sequencer mutations by strong/weak (G/C vs. A/T) category:

| | s1 | s2 | s3 |
|---|---|---|---|
| strong_strong | 0.1087 | 0.1487 | 0.2238 |
| strong_weak | 0.4338 | 0.6537 | 0.3568 |
| weak_strong | 1.1836 | 0.6333 | 0.7703 |
| weak_weak | 0.0683 | 0.0892 | 0.1332 |
RT insertions by template position:

| | s2 | s3 |
|---|---|---|
| 30 | 0.0332 | 0.0226 |
| 46 | 0.0000 | 0.0076 |
| 48 | 0.2105 | 0.2173 |
| 51 | 0.4135 | 0.6765 |
| 52 | 0.0000 | 0.0339 |
| 55 | 0.0000 | 0.0302 |
| 72 | 0.0000 | 0.1101 |
| 81 | 0.0000 | 0.0151 |
| 91 | 0.0886 | 0.2032 |
| 92 | 0.3308 | 0.0661 |
| 93 | 0.0000 | 0.0169 |
| 97 | 0.6616 | 0.1568 |
| 100 | 0.0332 | 0.0000 |
| 123 | 0.0000 | 0.0454 |
| 142 | 0.0000 | 0.0198 |
| 144 | 0.0000 | 0.0170 |
| 148 | 0.0000 | 0.0113 |
| 151 | 0.0662 | 0.0000 |
| 154 | 0.0886 | 0.0000 |
Sequencer insertions by template position:

| | s2 | s3 |
|---|---|---|
| 29 | 0.1440 | 0.1129 |
| 34 | 0.0000 | 0.1265 |
| 35 | 0.1080 | 0.0000 |
| 42 | 0.0000 | 0.1687 |
| 46 | 0.0000 | 0.0847 |
| 48 | 0.2688 | 0.2108 |
| 51 | 0.1080 | 0.1412 |
| 55 | 0.0000 | 0.1265 |
| 66 | 0.1440 | 0.0000 |
| 92 | 0.8601 | 0.5903 |
| 97 | 0.0000 | 0.0847 |
| 149 | 0.0000 | 0.1687 |
| 151 | 0.0000 | 0.0847 |
| 156 | 0.2688 | 0.0000 |
RT insertions by nucleotide:

| | s2 | s3 |
|---|---|---|
| A | 0.2769 | 0.7219 |
| C | 0.3804 | 0.1975 |
| G | 0.3434 | 0.1682 |
| T | 0.8436 | 0.6237 |
Sequencer insertions by nucleotide:

| | s2 | s3 |
|---|---|---|
| A | 0.108 | 0.2259 |
| C | 1.236 | 1.0119 |
| G | 0.504 | 0.4517 |
| T | 0.000 | 0.1265 |
The deletion tables were skipped, as the log above notes: there was insufficient data to quantify them.