This document first explores which genes are differentially expressed in human macrophages 4 hours after infection with L. major, and then asks the same question in mouse macrophages.

1 Which genes are DE in human macrophages at 4 hours upon infection with L. major?

2 Gather annotation data

I want to perform a series of comparisons among the host cells: human and mouse. Thus I need to collect annotation data for both species and get the set of orthologs between them.

2.1 Start with the human annotation data

In the following block, I download the human annotations from biomart. In addition, I take a moment to recreate the transcript IDs as observed in the salmon count tables (yes, I know they are not actually count tables). Finally, I create a table which maps transcripts to genes; this will be used when we generate the expressionset so that we get gene expression levels from transcripts via the R package ‘tximport’.

## The biomart annotations file already exists, loading from it.
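
For reference, such a mapping can be assembled directly with biomaRt, roughly as in the sketch below. This is not the chunk that produced the cached file above; the attribute choices and column order are assumptions.

library(biomaRt)
# Hypothetical sketch; the real chunk loads a previously cached biomart annotation file.
hs_mart <- useMart(biomart = "ENSEMBL_MART_ENSEMBL", dataset = "hsapiens_gene_ensembl")
hs_annot <- getBM(attributes = c("ensembl_transcript_id_version", "ensembl_gene_id",
                                 "hgnc_symbol", "description"),
                  mart = hs_mart)
# tximport expects a two-column data frame: transcript IDs first, gene IDs second.
tx2gene <- hs_annot[, c("ensembl_transcript_id_version", "ensembl_gene_id")]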

2.2 Generate expressionsets

The question is reasonably self-contained. I want to compare the uninfected human samples against any samples which were infected for 4 hours. So let us first pull those samples and then poke at them a bit.

The following block creates an expressionset using all human-quantified samples. As mentioned previously, it uses the table of transcript<->gene mappings and the biomart annotations.

Given this set of ~440 samples, it then drops the following:

  1. All samples marked ‘skipped’.
  2. All samples which are not from time ‘t4h’.

and resets the condition and batch factors to the ‘infection state’ metadatum and ‘study’, respectively.
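
The heavy lifting of that block is a tximport() call over the salmon quantifications, roughly as sketched below; the directory layout and the ‘sampleid’ metadata column are assumptions, not the actual hidden chunk.

library(tximport)
# Hypothetical salmon output layout; adjust paths/column names to the real metadata.
files <- file.path("preprocessing", sample_sheet[["sampleid"]], "salmon", "quant.sf")
names(files) <- sample_sheet[["sampleid"]]
txi <- tximport(files, type = "salmon", tx2gene = tx2gene)
# txi$counts then holds the gene-level estimates used to build the expressionset.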

## Reading the sample metadata.
## The sample definitions comprises: 437 rows(samples) and 55 columns(metadata fields).
## Reading count tables.
## Using the transcript to gene mapping.
## Reading salmon data with tximport.
## Finished reading count tables.
## Matched 19629 annotations and counts.
## Bringing together the count matrix and gene information.
## The mapped IDs are not the rownames of your gene information, changing them now.
## Some annotations were lost in merging, setting them to 'undefined'.
## There were 267, now there are 247 samples.
## There were 247, now there are 64 samples.
## 
## bead   no stim  yes 
##    3   18   35    8
## 
## lps-timecourse       m-gm-csf           mbio 
##              8             39             17
## Writing the first sheet, containing a legend and some summary data.

## Writing the raw reads.
## Graphing the raw reads.
## Warning in MASS::cov.trob(data[, vars]): Probable convergence failure

## Warning in MASS::cov.trob(data[, vars]): Probable convergence failure

## Warning in MASS::cov.trob(data[, vars]): Probable convergence failure

## Warning in MASS::cov.trob(data[, vars]): Probable convergence failure
## Writing the normalized reads.
## Graphing the normalized reads.

## Writing the median reads by factor.
## The factor bead has 3 rows.
## The factor no has 18 rows.
## The factor stim has 35 rows.
## The factor yes has 8 rows.
## Note: zip::zip() is deprecated, please use zip::zipr() instead

2.3 Examine t4h vs uninfected

Let us compute some generic metrics of the t4h human expressionset. As per usual, I plot the metrics of the raw data first, followed by the same metrics of log2(quantile(cpm(sva(filtered(data))))).
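
Judging by the warning and step messages below, that normalization chain corresponds to a single normalize_expt() call along these lines; the transform and batch argument values are reconstructed from the log and should be treated as approximate.

# Approximate call, reconstructed from the log output below; 'batch' value is an assumption.
hs_t4h_norm <- normalize_expt(hs_t4h_expt, filter = TRUE, norm = "quant",
                              convert = "cpm", transform = "log2", batch = "svaseq")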

## This function will replace the expt$expressionset slot with:
## log2(svaseq(cpm(quant(cbcb(data)))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Warning in normalize_expt(hs_t4h_expt, norm = "quant", convert = "cpm", :
## Quantile normalization and sva do not always play well together.
## Step 1: performing count filter with option: cbcb
## Removing 7360 low-count genes (12269 remaining).
## Step 2: normalizing the data with quant.
## Using normalize.quantiles.robust due to a thread error in preprocessCore.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 3822 values equal to 0, adding 1 to the matrix.
## Step 5: doing batch correction with svaseq.
## Note to self:  If you get an error like 'x contains missing values' The data has too many 0's and needs a stronger low-count filter applied.
## Passing off to all_adjusters.
## batch_counts: Before batch/surrogate estimation, 711279 entries are x>1: 90.6%.
## batch_counts: Before batch/surrogate estimation, 3822 entries are x==0: 0.487%.
## batch_counts: Before batch/surrogate estimation, 70115 entries are 0<x<1: 8.93%.
## The be method chose 12 surrogate variable(s).
## Attempting svaseq estimation with 12 surrogates.
## There are 1760 (0.224%) elements which are < 0 after batch correction.

2.4 Remove stimulated samples

I perhaps should have removed the stimulated samples sooner, but I was curious to see their effect on the distribution first.
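
Conceptually this is just a metadata subset: of the 64 samples above, dropping the 35 ‘stim’ samples leaves 29, which matches the log below. With a plain Biobase ExpressionSet the operation would look like the sketch below; the document itself uses an hpgltools subsetting helper, and the ‘condition’ column name is an assumption.

library(Biobase)
# 'eset' stands in for the ExpressionSet kept in expt$expressionset.
eset <- hs_t4h_expt[["expressionset"]]
keep <- pData(eset)[["condition"]] != "stim"
eset_nostim <- eset[, keep]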

## There were 64, now there are 29 samples.
## This function will replace the expt$expressionset slot with:
## log2(svaseq(cpm(cbcb(data))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Leaving the data unnormalized.  This is necessary for DESeq, but
##  EdgeR/limma might benefit from normalization.  Good choices include quantile,
##  size-factor, tmm, etc.
## Step 1: performing count filter with option: cbcb
## Removing 7759 low-count genes (11870 remaining).
## Step 2: not normalizing the data.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 3276 values equal to 0, adding 1 to the matrix.
## Step 5: doing batch correction with svaseq.
## Note to self:  If you get an error like 'x contains missing values' The data has too many 0's and needs a stronger low-count filter applied.
## Passing off to all_adjusters.
## batch_counts: Before batch/surrogate estimation, 319013 entries are x>1: 92.7%.
## batch_counts: Before batch/surrogate estimation, 3276 entries are x==0: 0.952%.
## batch_counts: Before batch/surrogate estimation, 21941 entries are 0<x<1: 6.37%.
## The be method chose 6 surrogate variable(s).
## Attempting svaseq estimation with 6 surrogates.
## There are 557 (0.162%) elements which are < 0 after batch correction.

## batch_counts: Before batch/surrogate estimation, 339179 entries are x>1: 98.5%.
## batch_counts: Before batch/surrogate estimation, 3276 entries are x==0: 0.952%.
## batch_counts: Before batch/surrogate estimation, 216 entries are 0<x<1: 0.0627%.
## The be method chose 5 surrogate variable(s).
## Attempting svaseq estimation with 5 surrogates.
## This function will replace the expt$expressionset slot with:
## log2(cpm(quant(cbcb(data))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Not correcting the count-data for batch effects.  If batch is
##  included in EdgerR/limma's model, then this is probably wise; but in extreme
##  batch effects this is a good parameter to play with.
## Step 1: performing count filter with option: cbcb
## Removing 0 low-count genes (11870 remaining).
## Step 2: normalizing the data with quant.
## Using normalize.quantiles.robust due to a thread error in preprocessCore.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 1964 values equal to 0, adding 1 to the matrix.
## Step 5: not doing batch correction.
## Plotting a PCA before surrogates/batch inclusion.
## Using svaseq to visualize before/after batch inclusion.
## Performing a test normalization with: raw
## This function will replace the expt$expressionset slot with:
## log2(svaseq(cpm(cbcb(data))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Leaving the data unnormalized.  This is necessary for DESeq, but
##  EdgeR/limma might benefit from normalization.  Good choices include quantile,
##  size-factor, tmm, etc.
## Step 1: performing count filter with option: cbcb
## Removing 0 low-count genes (11870 remaining).
## Step 2: not normalizing the data.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 3276 values equal to 0, adding 1 to the matrix.
## Step 5: doing batch correction with svaseq.
## Note to self:  If you get an error like 'x contains missing values' The data has too many 0's and needs a stronger low-count filter applied.
## Passing off to all_adjusters.
## batch_counts: Before batch/surrogate estimation, 319013 entries are x>1: 92.7%.
## batch_counts: Before batch/surrogate estimation, 3276 entries are x==0: 0.952%.
## batch_counts: Before batch/surrogate estimation, 21941 entries are 0<x<1: 6.37%.
## The be method chose 6 surrogate variable(s).
## Attempting svaseq estimation with 6 surrogates.
## There are 557 (0.162%) elements which are < 0 after batch correction.
## Finished running DE analyses, collecting outputs.
## Comparing analyses.

## Deleting the file excel/HsM0Lm4h_de_tables.xlsx before writing the tables.
## Writing a legend of columns.
## Printing a pca plot before/after surrogates/batch estimation.
## Working on 1/1: infection which is: yes/no.
## Found table with yes_vs_no
## 20181210 a pthread error in normalize.quantiles leads me to robust.
## Used Bon Ferroni corrected t test(s) between columns.
## Used Bon Ferroni corrected t test(s) between columns.
## Used Bon Ferroni corrected t test(s) between columns.
## Adding venn plots for infection.

## Limma expression coefficients for infection; R^2: 0.92; equation: y = 0.996x - 0.0675
## Edger expression coefficients for infection; R^2: 0.92; equation: y = 1.08x - 0.728
## DESeq2 expression coefficients for infection; R^2: 0.92; equation: y = 1.08x - 0.701
## Writing summary information.
## Attempting to add the comparison plot to pairwise_summary at row: 23 and column: 1
## Performing save of the workbook.

## Writing a legend of columns.
## The count is: 1 and the test is: limma.
## The count is: 2 and the test is: edger.
## The count is: 3 and the test is: deseq.
## The count is: 4 and the test is: ebseq.
## The count is: 5 and the test is: basic.
## Writing excel data according to limma for infection: 1/5.
## After (adj)p filter, the up genes table has 2976 genes.
## After (adj)p filter, the down genes table has 4109 genes.
## After fold change filter, the up genes table has 1081 genes.
## After fold change filter, the down genes table has 871 genes.
## Printing significant genes to the file: excel/HsM0Lm4h_sig_tables.xlsx
## 1/1: Creating significant table up_1limma_infection
## Writing excel data according to edger for infection: 1/5.
## After (adj)p filter, the up genes table has 3428 genes.
## After (adj)p filter, the down genes table has 3346 genes.
## After fold change filter, the up genes table has 1243 genes.
## After fold change filter, the down genes table has 1029 genes.
## Printing significant genes to the file: excel/HsM0Lm4h_sig_tables.xlsx
## 1/1: Creating significant table up_1edger_infection
## Writing excel data according to deseq for infection: 1/5.
## After (adj)p filter, the up genes table has 3600 genes.
## After (adj)p filter, the down genes table has 3712 genes.
## After fold change filter, the up genes table has 1226 genes.
## After fold change filter, the down genes table has 1056 genes.
## Printing significant genes to the file: excel/HsM0Lm4h_sig_tables.xlsx
## 1/1: Creating significant table up_1deseq_infection
## Writing excel data according to ebseq for infection: 1/5.
## After (adj)p filter, the up genes table has 1893 genes.
## After (adj)p filter, the down genes table has 1940 genes.
## After fold change filter, the up genes table has 895 genes.
## After fold change filter, the down genes table has 1031 genes.
## Printing significant genes to the file: excel/HsM0Lm4h_sig_tables.xlsx
## 1/1: Creating significant table up_1ebseq_infection
## Writing excel data according to basic for infection: 1/5.
## After (adj)p filter, the up genes table has 1335 genes.
## After (adj)p filter, the down genes table has 2590 genes.
## After fold change filter, the up genes table has 965 genes.
## After fold change filter, the down genes table has 1441 genes.
## Printing significant genes to the file: excel/HsM0Lm4h_sig_tables.xlsx
## 1/1: Creating significant table up_1basic_infection
## Adding significance bar plots.

3 Which genes are DE in mouse macrophages at 4 hours upon infection with L. major?

Most of this process is the same as what was performed for the human data.

3.1 Gather annotation data

As with the human data, I need to collect annotation data for both species and get the set of orthologs between them.

3.2 Generate expressionsets

The question is reasonably self-contained. I want to compare the uninfected mouse samples against any samples which were infected for 4 hours. So let us first pull those samples and then poke at them a bit.

## Reading the sample metadata.
## The sample definitions comprises: 437 rows(samples) and 55 columns(metadata fields).
## Reading count tables.
## Using the transcript to gene mapping.
## Reading salmon data with tximport.
## Finished reading count tables.
## Matched 19660 annotations and counts.
## Bringing together the count matrix and gene information.
## The mapped IDs are not the rownames of your gene information, changing them now.
## Some annotations were lost in merging, setting them to 'undefined'.
## There were 105, now there are 41 samples.
## 
##   no stim  yes 
##   11   24    6
## 
## undefined 
##        41
## Writing the first sheet, containing a legend and some summary data.

## Writing the raw reads.
## Graphing the raw reads.
## Writing the normalized reads.
## Graphing the normalized reads.

## Writing the median reads by factor.
## The factor no has 11 rows.
## The factor stim has 24 rows.
## The factor yes has 6 rows.

3.3 Examine t4h vs uninfected

## This function will replace the expt$expressionset slot with:
## log2(svaseq(cpm(quant(cbcb(data)))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Warning in normalize_expt(mm_t4h_expt, norm = "quant", convert = "cpm", :
## Quantile normalization and sva do not always play well together.
## Step 1: performing count filter with option: cbcb
## Removing 9350 low-count genes (10310 remaining).
## Step 2: normalizing the data with quant.
## Using normalize.quantiles.robust due to a thread error in preprocessCore.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 38 values equal to 0, adding 1 to the matrix.
## Step 5: doing batch correction with svaseq.
## Note to self:  If you get an error like 'x contains missing values' The data has too many 0's and needs a stronger low-count filter applied.
## Passing off to all_adjusters.
## batch_counts: Before batch/surrogate estimation, 390888 entries are x>1: 92.5%.
## batch_counts: Before batch/surrogate estimation, 38 entries are x==0: 0.00899%.
## batch_counts: Before batch/surrogate estimation, 31784 entries are 0<x<1: 7.52%.
## The be method chose 8 surrogate variable(s).
## Attempting svaseq estimation with 8 surrogates.
## There are 807 (0.191%) elements which are < 0 after batch correction.

3.4 Perform DE analyses
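
No prose accompanies this step, so briefly: the log below shows each of the pairwise DE runners (limma, edgeR, DESeq2, EBSeq, and a basic test) being executed and then combined into the xlsx outputs. A rough sketch of the invocation follows; the wrapper name and its model_batch argument are assumptions, whereas combine_de_tables() and the output filename appear in the log itself.

# Hypothetical wrapper call; the individual *_pairwise() runners shown below do the work.
mm_t4h_de <- all_pairwise(mm_t4h_expt, model_batch = "svaseq")
mm_t4h_tables <- combine_de_tables(mm_t4h_de,
                                   excel = "excel/MmM0Lm4h_de_tables.xlsx")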

## This function will replace the expt$expressionset slot with:
## log2(svaseq(cpm(cbcb(data))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Leaving the data unnormalized.  This is necessary for DESeq, but
##  EdgeR/limma might benefit from normalization.  Good choices include quantile,
##  size-factor, tmm, etc.
## Step 1: performing count filter with option: cbcb
## Removing 9350 low-count genes (10310 remaining).
## Step 2: not normalizing the data.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 3253 values equal to 0, adding 1 to the matrix.
## Step 5: doing batch correction with svaseq.
## Note to self:  If you get an error like 'x contains missing values' The data has too many 0's and needs a stronger low-count filter applied.
## Passing off to all_adjusters.
## batch_counts: Before batch/surrogate estimation, 385615 entries are x>1: 91.2%.
## batch_counts: Before batch/surrogate estimation, 3253 entries are x==0: 0.770%.
## batch_counts: Before batch/surrogate estimation, 33842 entries are 0<x<1: 8.01%.
## The be method chose 7 surrogate variable(s).
## Attempting svaseq estimation with 7 surrogates.
## There are 1186 (0.281%) elements which are < 0 after batch correction.

## batch_counts: Before batch/surrogate estimation, 417208 entries are x>1: 98.7%.
## batch_counts: Before batch/surrogate estimation, 3253 entries are x==0: 0.770%.
## batch_counts: Before batch/surrogate estimation, 392 entries are 0<x<1: 0.0927%.
## The be method chose 5 surrogate variable(s).
## Attempting svaseq estimation with 5 surrogates.
## This function will replace the expt$expressionset slot with:
## log2(cpm(quant(cbcb(data))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Not correcting the count-data for batch effects.  If batch is
##  included in EdgerR/limma's model, then this is probably wise; but in extreme
##  batch effects this is a good parameter to play with.
## Step 1: performing count filter with option: cbcb
## Removing 0 low-count genes (10310 remaining).
## Step 2: normalizing the data with quant.
## Using normalize.quantiles.robust due to a thread error in preprocessCore.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 38 values equal to 0, adding 1 to the matrix.
## Step 5: not doing batch correction.
## Plotting a PCA before surrogates/batch inclusion.
## Using svaseq to visualize before/after batch inclusion.
## Performing a test normalization with: raw
## This function will replace the expt$expressionset slot with:
## log2(svaseq(cpm(cbcb(data))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Leaving the data unnormalized.  This is necessary for DESeq, but
##  EdgeR/limma might benefit from normalization.  Good choices include quantile,
##  size-factor, tmm, etc.
## Step 1: performing count filter with option: cbcb
## Removing 0 low-count genes (10310 remaining).
## Step 2: not normalizing the data.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 3253 values equal to 0, adding 1 to the matrix.
## Step 5: doing batch correction with svaseq.
## Note to self:  If you get an error like 'x contains missing values' The data has too many 0's and needs a stronger low-count filter applied.
## Passing off to all_adjusters.
## batch_counts: Before batch/surrogate estimation, 385615 entries are x>1: 91.2%.
## batch_counts: Before batch/surrogate estimation, 3253 entries are x==0: 0.770%.
## batch_counts: Before batch/surrogate estimation, 33842 entries are 0<x<1: 8.01%.
## The be method chose 7 surrogate variable(s).
## Attempting svaseq estimation with 7 surrogates.
## There are 1186 (0.281%) elements which are < 0 after batch correction.
## Finished running DE analyses, collecting outputs.
## Comparing analyses.

## Deleting the file excel/MmM0Lm4h_de_tables.xlsx before writing the tables.
## Writing a legend of columns.
## Printing a pca plot before/after surrogates/batch estimation.
## Working on 1/1: infection which is: yes/no.
## Found table with yes_vs_no
## 20181210 a pthread error in normalize.quantiles leads me to robust.
## Used Bon Ferroni corrected t test(s) between columns.
## Used Bon Ferroni corrected t test(s) between columns.
## Used Bon Ferroni corrected t test(s) between columns.
## Adding venn plots for infection.

## Limma expression coefficients for infection; R^2: 0.92; equation: y = 0.95x + 0.187
## Edger expression coefficients for infection; R^2: 0.91; equation: y = 1.01x - 0.0739
## DESeq2 expression coefficients for infection; R^2: 0.909; equation: y = 1.01x - 0.0692
## Writing summary information.
## Attempting to add the comparison plot to pairwise_summary at row: 23 and column: 1
## Performing save of the workbook.

## Writing a legend of columns.
## The count is: 1 and the test is: limma.
## The count is: 2 and the test is: edger.
## The count is: 3 and the test is: deseq.
## The count is: 4 and the test is: ebseq.
## The count is: 5 and the test is: basic.
## Writing excel data according to limma for infection: 1/5.
## After (adj)p filter, the up genes table has 2234 genes.
## After (adj)p filter, the down genes table has 2524 genes.
## After fold change filter, the up genes table has 825 genes.
## After fold change filter, the down genes table has 895 genes.
## Printing significant genes to the file: excel/MmM0Lm4h_sig_tables.xlsx
## 1/1: Creating significant table up_1limma_infection
## Writing excel data according to edger for infection: 1/5.
## After (adj)p filter, the up genes table has 2153 genes.
## After (adj)p filter, the down genes table has 1967 genes.
## After fold change filter, the up genes table has 924 genes.
## After fold change filter, the down genes table has 911 genes.
## Printing significant genes to the file: excel/MmM0Lm4h_sig_tables.xlsx
## 1/1: Creating significant table up_1edger_infection
## Writing excel data according to deseq for infection: 1/5.
## After (adj)p filter, the up genes table has 2474 genes.
## After (adj)p filter, the down genes table has 2404 genes.
## After fold change filter, the up genes table has 929 genes.
## After fold change filter, the down genes table has 932 genes.
## Printing significant genes to the file: excel/MmM0Lm4h_sig_tables.xlsx
## 1/1: Creating significant table up_1deseq_infection
## Writing excel data according to ebseq for infection: 1/5.
## After (adj)p filter, the up genes table has 1149 genes.
## After (adj)p filter, the down genes table has 1310 genes.
## After fold change filter, the up genes table has 644 genes.
## After fold change filter, the down genes table has 710 genes.
## Printing significant genes to the file: excel/MmM0Lm4h_sig_tables.xlsx
## 1/1: Creating significant table up_1ebseq_infection
## Writing excel data according to basic for infection: 1/5.
## After (adj)p filter, the up genes table has 2 genes.
## After (adj)p filter, the down genes table has 3 genes.
## After fold change filter, the up genes table has 2 genes.
## After fold change filter, the down genes table has 3 genes.
## Printing significant genes to the file: excel/MmM0Lm4h_sig_tables.xlsx
## 1/1: Creating significant table up_1basic_infection
## Adding significance bar plots.

3.5 Compare this to the previous result

Let us see if our human differential expression result is similar to that obtained in Table S2.
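
The comparison itself is a merge on gene IDs followed by a correlation test, roughly as below; hs_limma_table and table_s2 are placeholder names, while the limma_logfc and "Fold change" columns match the test output that follows.

# Merge our limma logFC with the published Table S2 fold changes by gene ID, then correlate.
merged <- merge(hs_limma_table, table_s2, by = "row.names")
cor.test(merged[["limma_logfc"]], merged[["Fold change"]])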

## 
##  Pearson's product-moment correlation
## 
## data:  merged[["limma_logfc"]] and merged[["Fold change"]]
## t = 127, df = 5082, p-value <2e-16
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  0.8645 0.8778
## sample estimates:
##    cor 
## 0.8713

3.6 Compare the previous mouse results to these mouse results

## 
##  Pearson's product-moment correlation
## 
## data:  merged[["limma_logfc"]] and merged[["Fold change"]]
## t = 223, df = 5718, p-value <2e-16
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  0.9445 0.9498
## sample estimates:
##    cor 
## 0.9472

4 What genes are shared in the mouse and human data?

This is one method of addressing Najib's big question #1c. In this method, I take the tables from the human analysis and the mouse analysis separately, then merge them using a table of orthologs.

Side note: a different way of addressing this question resides in 20190220_host_comparisons.Rmd. In that competing method, the table of orthologs is used at the beginning to make a single set of IDs for the human and mouse genes, and the differential expression analysis is then performed on that merged set.
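
A sketch of the merge used by the first method follows; the ortholog table's hs_gene_id/mm_gene_id column names and the DE table objects are placeholders, while the limma_logfc.x/.y columns match the correlation output further down.

# Merge the human DE table with the ortholog map, then with the mouse DE table.
both_table_hs <- merge(hs_de_table, orthologs, by.x = "row.names", by.y = "hs_gene_id")
both_table_hs <- merge(both_table_hs, mm_de_table,
                       by.x = "mm_gene_id", by.y = "row.names")
cor.test(both_table_hs[["limma_logfc.x"]], both_table_hs[["limma_logfc.y"]])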

4.1 Extract human mouse orthologs

My load_biomart_orthologs() function should provide this gene ID mapping table.
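
Under the hood this is essentially a two-mart biomaRt getLDS() query, something like the generic sketch below; this is not the function's actual body, and its caching and choice of archive host are omitted.

library(biomaRt)
# Generic human<->mouse ortholog query by Ensembl gene ID.
hs_mart <- useMart("ENSEMBL_MART_ENSEMBL", dataset = "hsapiens_gene_ensembl")
mm_mart <- useMart("ENSEMBL_MART_ENSEMBL", dataset = "mmusculus_gene_ensembl")
orthologs <- getLDS(attributes = "ensembl_gene_id", mart = hs_mart,
                    attributesL = "ensembl_gene_id", martL = mm_mart)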

## Unable to perform useMart, perhaps the host/mart is incorrect: dec2016.archive.ensembl.org ENSEMBL_MART_ENSEMBL.
## The available first_marts are:
## ENSEMBL_MART_ENSEMBLENSEMBL_MART_MOUSEENSEMBL_MART_SNPENSEMBL_MART_FUNCGENENSEMBL_MART_VEGA
## Trying the first one.
## 
##  Pearson's product-moment correlation
## 
## data:  both_table_hs[["limma_logfc.x"]] and both_table_hs[["limma_logfc.y"]]
## t = 34, df = 13699, p-value <2e-16
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  0.2656 0.2964
## sample estimates:
##    cor 
## 0.2811
## 
##  Pearson's product-moment correlation
## 
## data:  both_table_mm[["limma_logfc.x"]] and both_table_mm[["limma_logfc.y"]]
## t = 34, df = 21859, p-value <2e-16
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  0.2135 0.2387
## sample estimates:
##    cor 
## 0.2261

I believe these both_table_hs and both_table_mm tables are good candidates for the set of genes which are shared across the human and mouse samples, from the human and mouse perspectives, respectively.

Now let us write some of these out separately.
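
A minimal way to write one of these tables would be openxlsx; the exports below are actually handled by an hpgltools helper, and shared_up_hs is a placeholder name.

library(openxlsx)
# One xlsx per shared up/down table, as listed below.
write.xlsx(shared_up_hs, file = "excel/HsM0Lm4h_vs_MmM0Lm4h_shared_up_hs.xlsx")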

## Saving to: excel/HsM0Lm4h_vs_MmM0Lm4h_shared_up_hs.xlsx
## Saving to: excel/HsM0Lm4h_vs_MmM0Lm4h_shared_up_mm.xlsx
## Saving to: excel/HsM0Lm4h_vs_MmM0Lm4h_shared_down_hs.xlsx
## Saving to: excel/HsM0Lm4h_vs_MmM0Lm4h_shared_down_mm.xlsx

5 Next question: Which 4 hour genes are shared with CIDEIM?

“Which 4 hour DE genes are shared with the CIDEIM panamensis infected human macrophages?”

So, once again there are two ways of approaching this question:

  1. Examine the question of infected vs. uninfected for the two data sets separately. Then query the results and see what comes out.
  2. Examine this as a single question using the union of both data sets.

Since I already have a putative answer for the human 4 hour macrophage data, the simplest method is to just look at the CIDEIM data and do #1 above. So let us do that first.
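
For method #1, the overlap reduces to intersecting the significant gene IDs from the two separate analyses, e.g. (placeholder object names):

# These would be the significant-gene tables from the 4 hour analysis above
# and the corresponding CIDEIM analysis below.
shared_up_ids <- intersect(rownames(hs_t4h_sig_up), rownames(cideim_sig_up))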

## There were 267, now there are 23 samples.
## This function will replace the expt$expressionset slot with:
## log2(svaseq(cpm(quant(simple(data)))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Warning in normalize_expt(cideim_macr, transform = "log2", convert =
## "cpm", : Quantile normalization and sva do not always play well together.
## Step 1: performing count filter with option: simple
## Removing 2127 low-count genes (17502 remaining).
## Step 2: normalizing the data with quant.
## Using normalize.quantiles.robust due to a thread error in preprocessCore.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 33659 values equal to 0, adding 1 to the matrix.
## Step 5: doing batch correction with svaseq.
## Note to self:  If you get an error like 'x contains missing values' The data has too many 0's and needs a stronger low-count filter applied.
## Passing off to all_adjusters.
## batch_counts: Before batch/surrogate estimation, 278638 entries are x>1: 69.2%.
## batch_counts: Before batch/surrogate estimation, 33659 entries are x==0: 8.36%.
## batch_counts: Before batch/surrogate estimation, 90249 entries are 0<x<1: 22.4%.
## The be method chose 5 surrogate variable(s).
## Attempting svaseq estimation with 5 surrogates.
## There are 13850 (3.44%) elements which are < 0 after batch correction.

## This function will replace the expt$expressionset slot with:
## log2(cpm(quant(cbcb(data))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Not correcting the count-data for batch effects.  If batch is
##  included in EdgerR/limma's model, then this is probably wise; but in extreme
##  batch effects this is a good parameter to play with.
## Step 1: performing count filter with option: cbcb
## Removing 0 low-count genes (12499 remaining).
## Step 2: normalizing the data with quant.
## Using normalize.quantiles.robust due to a thread error in preprocessCore.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 547 values equal to 0, adding 1 to the matrix.
## Step 5: not doing batch correction.
## Plotting a PCA before surrogates/batch inclusion.
## Using limma's removeBatchEffect to visualize with(out) batch inclusion.
## Starting basic_pairwise().
## Starting basic pairwise comparison.
## Leaving the data alone, regardless of normalization state.
## Basic step 0/3: Transforming data.
## Basic step 1/3: Creating median and variance tables.
## Basic step 2/3: Performing 3 comparisons.
## Basic step 3/3: Creating faux DE Tables.
## Basic: Returning tables.
## Starting deseq_pairwise().
## Starting DESeq2 pairwise comparisons.
## About to round the data, this is a pretty terrible thing to do. But if you, like me, want to see what happens when you put non-standard data into deseq, then here you go.
## Warning in choose_binom_dataset(input, force = force): This data was
## inappropriately forced into integers.
## The condition+batch model failed. Does your experimental design support both condition and batch? Using only a conditional model.
## Choosing the non-intercept containing model.
## DESeq2 step 1/5: Including batch and condition in the deseq model.
## converting counts to integer mode
## DESeq2 step 2/5: Estimate size factors.
## DESeq2 step 3/5: Estimate dispersions.
## gene-wise dispersion estimates
## mean-dispersion relationship
## final dispersion estimates
## Using a parametric fitting seems to have worked.
## DESeq2 step 4/5: nbinomWaldTest.
## Starting ebseq_pairwise().
## The data should be suitable for EdgeR/DESeq/EBSeq. If they freak out, check the state of the count table and ensure that it is in integer counts.
## Starting EBSeq pairwise subset.
## Choosing the non-intercept containing model.
## Starting EBTest of no vs. yes.
## Copying ppee values as ajusted p-values until I figure out how to deal with them.
## Starting edger_pairwise().
## Starting edgeR pairwise comparisons.
## About to round the data, this is a pretty terrible thing to do. But if you, like me, want to see what happens when you put non-standard data into deseq, then here you go.
## Warning in choose_binom_dataset(input, force = force): This data was
## inappropriately forced into integers.
## The condition+batch model failed. Does your experimental design support both condition and batch? Using only a conditional model.
## Choosing the non-intercept containing model.
## EdgeR step 1/9: Importing and normalizing data.
## EdgeR step 2/9: Estimating the common dispersion.
## EdgeR step 3/9: Estimating dispersion across genes.
## EdgeR step 4/9: Estimating GLM Common dispersion.
## EdgeR step 5/9: Estimating GLM Trended dispersion.
## EdgeR step 6/9: Estimating GLM Tagged dispersion.
## EdgeR step 7/9: Running glmFit, switch to glmQLFit by changing the argument 'edger_test'.
## EdgeR step 8/9: Making pairwise contrasts.

## Starting limma_pairwise().
## Starting limma pairwise comparison.
## Leaving the data alone, regardless of normalization state.
## libsize was not specified, this parameter has profound effects on limma's result.
## Using the libsize from expt$best_libsize.
## Limma step 1/6: choosing model.
## The condition+batch model failed. Does your experimental design support both condition and batch? Using only a conditional model.
## Choosing the non-intercept containing model.
## Limma step 2/6: running limma::voom(), switch with the argument 'which_voom'.
## Using normalize.method=quantile for voom.

## Limma step 3/6: running lmFit with method: ls.
## Limma step 4/6: making and fitting contrasts with no intercept. (~ 0 + factors)
## Limma step 5/6: Running eBayes with robust=FALSE and trend=FALSE.
## Limma step 6/6: Writing limma outputs.
## Limma step 6/6: 1/1: Creating table: yes_vs_no.  Adjust=BH
## Limma step 6/6: 1/2: Creating table: no.  Adjust=BH
## Limma step 6/6: 2/2: Creating table: yes.  Adjust=BH
## Comparing analyses.

## Writing a legend of columns.
## Working on 1/1: infection which is: yes/no.
## Found table with yes_vs_no
## 20181210 a pthread error in normalize.quantiles leads me to robust.
## Used Bon Ferroni corrected t test(s) between columns.
## Used Bon Ferroni corrected t test(s) between columns.
## Used Bon Ferroni corrected t test(s) between columns.
## 
##  Pearson's product-moment correlation
## 
## data:  cideim_merged[["deseq_logfc.x"]] and cideim_merged[["deseq_logfc.y"]]
## t = 31, df = 11570, p-value <2e-16
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
##  0.2574 0.2911
## sample estimates:
##    cor 
## 0.2743

5.1 Try again as one big expressionset
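
Treating the two subsets as one expressionset amounts to combining them sample-wise. With plain Biobase objects this is the combine() generic, as sketched below; hs_t4h_eset and cideim_eset are placeholder names, and the document likely uses an hpgltools equivalent.

library(Biobase)
# combine() merges two ExpressionSets that share feature annotations.
cideim_t4h_eset <- combine(hs_t4h_eset, cideim_eset)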

## There were 267, now there are 247 samples.
## There were 247, now there are 64 samples.
## There were 267, now there are 87 samples.
## This function will replace the expt$expressionset slot with:
## log2(cpm(quant(cbcb(data))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Not correcting the count-data for batch effects.  If batch is
##  included in EdgerR/limma's model, then this is probably wise; but in extreme
##  batch effects this is a good parameter to play with.
## Step 1: performing count filter with option: cbcb
## Removing 6648 low-count genes (12981 remaining).
## Step 2: normalizing the data with quant.
## Using normalize.quantiles.robust due to a thread error in preprocessCore.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 1937 values equal to 0, adding 1 to the matrix.
## Step 5: not doing batch correction.

## This function will replace the expt$expressionset slot with:
## log2(svaseq(cpm(quant(cbcb(data)))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Warning in normalize_expt(cideim_t4h, filter = TRUE, norm = "quant",
## convert = "cpm", : Quantile normalization and sva do not always play well
## together.
## Step 1: performing count filter with option: cbcb
## Removing 6648 low-count genes (12981 remaining).
## Step 2: normalizing the data with quant.
## Using normalize.quantiles.robust due to a thread error in preprocessCore.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 1937 values equal to 0, adding 1 to the matrix.
## Step 5: doing batch correction with svaseq.
## Note to self:  If you get an error like 'x contains missing values' The data has too many 0's and needs a stronger low-count filter applied.
## Passing off to all_adjusters.
## batch_counts: Before batch/surrogate estimation, 997436 entries are x>1: 88.3%.
## batch_counts: Before batch/surrogate estimation, 1937 entries are x==0: 0.172%.
## batch_counts: Before batch/surrogate estimation, 129974 entries are 0<x<1: 11.5%.
## The be method chose 14 surrogate variable(s).
## Attempting svaseq estimation with 14 surrogates.
## There are 3170 (0.281%) elements which are < 0 after batch correction.

## batch_counts: Before batch/surrogate estimation, 1089986 entries are x>1: 96.5%.
## batch_counts: Before batch/surrogate estimation, 25706 entries are x==0: 2.28%.
## batch_counts: Before batch/surrogate estimation, 1303 entries are 0<x<1: 0.115%.
## The be method chose 10 surrogate variable(s).
## Attempting svaseq estimation with 10 surrogates.
## This function will replace the expt$expressionset slot with:
## log2(cpm(quant(cbcb(data))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Not correcting the count-data for batch effects.  If batch is
##  included in EdgerR/limma's model, then this is probably wise; but in extreme
##  batch effects this is a good parameter to play with.
## Step 1: performing count filter with option: cbcb
## Removing 0 low-count genes (12981 remaining).
## Step 2: normalizing the data with quant.
## Using normalize.quantiles.robust due to a thread error in preprocessCore.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 1937 values equal to 0, adding 1 to the matrix.
## Step 5: not doing batch correction.
## Plotting a PCA before surrogates/batch inclusion.
## Using svaseq to visualize before/after batch inclusion.
## Performing a test normalization with: raw
## This function will replace the expt$expressionset slot with:
## log2(svaseq(cpm(cbcb(data))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Leaving the data unnormalized.  This is necessary for DESeq, but
##  EdgeR/limma might benefit from normalization.  Good choices include quantile,
##  size-factor, tmm, etc.
## Step 1: performing count filter with option: cbcb
## Removing 0 low-count genes (12981 remaining).
## Step 2: not normalizing the data.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 25706 values equal to 0, adding 1 to the matrix.
## Step 5: doing batch correction with svaseq.
## Note to self:  If you get an error like 'x contains missing values' The data has too many 0's and needs a stronger low-count filter applied.
## Passing off to all_adjusters.
## batch_counts: Before batch/surrogate estimation, 988589 entries are x>1: 87.5%.
## batch_counts: Before batch/surrogate estimation, 25706 entries are x==0: 2.28%.
## batch_counts: Before batch/surrogate estimation, 115052 entries are 0<x<1: 10.2%.
## The be method chose 11 surrogate variable(s).
## Attempting svaseq estimation with 11 surrogates.
## There are 5446 (0.482%) elements which are < 0 after batch correction.
## Starting basic_pairwise().
## Starting basic pairwise comparison.
## Leaving the data alone, regardless of normalization state.
## Basic step 0/3: Transforming data.
## Basic step 1/3: Creating median and variance tables.
## Basic step 2/3: Performing 10 comparisons.
## Basic step 3/3: Creating faux DE Tables.
## Basic: Returning tables.
## Starting deseq_pairwise().
## Starting DESeq2 pairwise comparisons.
## About to round the data, this is a pretty terrible thing to do. But if you, like me, want to see what happens when you put non-standard data into deseq, then here you go.
## Warning in choose_binom_dataset(input, force = force): This data was
## inappropriately forced into integers.
## Including batch estimates from sva/ruv/pca in the model.
## Choosing the non-intercept containing model.
## DESeq2 step 1/5: Including a matrix of batch estimates in the deseq model.
## converting counts to integer mode
## DESeq2 step 2/5: Estimate size factors.
## DESeq2 step 3/5: Estimate dispersions.
## gene-wise dispersion estimates
## mean-dispersion relationship
## final dispersion estimates
## Using a parametric fitting seems to have worked.
## DESeq2 step 4/5: nbinomWaldTest.
## Starting ebseq_pairwise().
## The data should be suitable for EdgeR/DESeq/EBSeq. If they freak out, check the state of the count table and ensure that it is in integer counts.
## Starting EBSeq pairwise subset.
## Choosing the non-intercept containing model.
## Starting EBTest of bead vs. no.
## Copying ppee values as ajusted p-values until I figure out how to deal with them.
## Starting EBTest of bead vs. stim.
## Copying ppee values as ajusted p-values until I figure out how to deal with them.
## Starting EBTest of bead vs. yes.
## Copying ppee values as ajusted p-values until I figure out how to deal with them.
## Starting EBTest of no vs. stim.
## Copying ppee values as ajusted p-values until I figure out how to deal with them.
## Starting EBTest of no vs. yes.
## Copying ppee values as ajusted p-values until I figure out how to deal with them.
## Starting EBTest of stim vs. yes.
## Copying ppee values as ajusted p-values until I figure out how to deal with them.
## Starting edger_pairwise().
## Starting edgeR pairwise comparisons.
## About to round the data, this is a pretty terrible thing to do. But if you, like me, want to see what happens when you put non-standard data into deseq, then here you go.
## Warning in choose_binom_dataset(input, force = force): This data was
## inappropriately forced into integers.
## Including batch estimates from sva/ruv/pca in the model.
## Choosing the non-intercept containing model.
## EdgeR step 1/9: Importing and normalizing data.
## EdgeR step 2/9: Estimating the common dispersion.
## EdgeR step 3/9: Estimating dispersion across genes.
## EdgeR step 4/9: Estimating GLM Common dispersion.
## EdgeR step 5/9: Estimating GLM Trended dispersion.
## EdgeR step 6/9: Estimating GLM Tagged dispersion.
## EdgeR step 7/9: Running glmFit, switch to glmQLFit by changing the argument 'edger_test'.
## EdgeR step 8/9: Making pairwise contrasts.

## Starting limma_pairwise().
## Starting limma pairwise comparison.
## Leaving the data alone, regardless of normalization state.
## libsize was not specified, this parameter has profound effects on limma's result.
## Using the libsize from expt$best_libsize.
## Limma step 1/6: choosing model.
## Including batch estimates from sva/ruv/pca in the model.
## Choosing the non-intercept containing model.
## Limma step 2/6: running limma::voom(), switch with the argument 'which_voom'.
## Using normalize.method=quantile for voom.

## Limma step 3/6: running lmFit with method: ls.
## Limma step 4/6: making and fitting contrasts with no intercept. (~ 0 + factors)
## Limma step 5/6: Running eBayes with robust=FALSE and trend=FALSE.
## Limma step 6/6: Writing limma outputs.
## Limma step 6/6: 1/6: Creating table: no_vs_bead.  Adjust=BH
## Limma step 6/6: 2/6: Creating table: stim_vs_bead.  Adjust=BH
## Limma step 6/6: 3/6: Creating table: yes_vs_bead.  Adjust=BH
## Limma step 6/6: 4/6: Creating table: stim_vs_no.  Adjust=BH
## Limma step 6/6: 5/6: Creating table: yes_vs_no.  Adjust=BH
## Limma step 6/6: 6/6: Creating table: yes_vs_stim.  Adjust=BH
## Limma step 6/6: 1/4: Creating table: bead.  Adjust=BH
## Limma step 6/6: 2/4: Creating table: no.  Adjust=BH
## Limma step 6/6: 3/4: Creating table: stim.  Adjust=BH
## Limma step 6/6: 4/4: Creating table: yes.  Adjust=BH
## Comparing analyses.

## Writing a legend of columns.
## Printing a pca plot before/after surrogates/batch estimation.
## Working on 1/1: infection which is: yes/no.
## Found table with yes_vs_no
## 20181210 a pthread error in normalize.quantiles leads me to robust.
## Used Bon Ferroni corrected t test(s) between columns.
## Used Bon Ferroni corrected t test(s) between columns.
## Used Bon Ferroni corrected t test(s) between columns.
## Adding venn plots for infection.

## Limma expression coefficients for infection; R^2: 0.977; equation: y = 0.983x + 0.0397
## Edger expression coefficients for infection; R^2: 0.975; equation: y = 0.981x + 0.154
## DESeq2 expression coefficients for infection; R^2: 0.975; equation: y = 0.981x + 0.173
## Writing summary information.
## Attempting to add the comparison plot to pairwise_summary at row: 23 and column: 1
## Performing save of the workbook.

## Not adding plots, limma had an error.
## Not adding plots, deseq had an error.
## Not adding plots, edger had an error.
## Not adding plots, basic had an error.
## Writing a legend of columns.
## Error in combine_de_tables(cideim_t4h_tables, excel = "excel/HsM0Lm4h_vs_HsM0Lp_sig_tables.xlsx"): None of the DE tools appear to have worked.

6 Next, overlap between macrophages and neutrophils

All of the neutrophil data is in mouse, apparently. This will make it more difficult, perhaps impossible, to get an accurate answer.

So instead, I will look at infected vs. uninfected in the mouse data and then compare that to the earliest timepoints in the Sacks neutrophil data.

## There were 105, now there are 80 samples.
## There were 80, now there are 56 samples.
## This function will replace the expt$expressionset slot with:
## cpm(quant(cbcb(data)))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Leaving the data in its current base format, keep in mind that
##  some metrics are easier to see when the data is log2 transformed, but
##  EdgeR/DESeq do not accept transformed data.
## Not correcting the count-data for batch effects.  If batch is
##  included in EdgerR/limma's model, then this is probably wise; but in extreme
##  batch effects this is a good parameter to play with.
## Step 1: performing count filter with option: cbcb
## Removing 0 low-count genes (19660 remaining).
## Step 2: normalizing the data with quant.
## Using normalize.quantiles.robust due to a thread error in preprocessCore.
## Step 3: converting the data with cpm.
## Step 4: not transforming the data.
## Step 5: not doing batch correction.

## This function will replace the expt$expressionset slot with:
## svaseq(cpm(quant(cbcb(data))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Leaving the data in its current base format, keep in mind that
##  some metrics are easier to see when the data is log2 transformed, but
##  EdgeR/DESeq do not accept transformed data.
## Warning in normalize_expt(neut_macr_mus, convert = "cpm", norm = "quant", :
## Quantile normalization and sva do not always play well together.
## Step 1: performing count filter with option: cbcb
## Removing 0 low-count genes (19660 remaining).
## Step 2: normalizing the data with quant.
## Using normalize.quantiles.robust due to a thread error in preprocessCore.
## Step 3: converting the data with cpm.
## Step 4: not transforming the data.
## Step 5: doing batch correction with svaseq.
## Note to self:  If you get an error like 'x contains missing values' The data has too many 0's and needs a stronger low-count filter applied.
## Passing off to all_adjusters.
## batch_counts: Before batch/surrogate estimation, 515332 entries are x>1: 46.8%.
## batch_counts: Before batch/surrogate estimation, 323375 entries are x==0: 29.4%.
## batch_counts: Before batch/surrogate estimation, 262253 entries are 0<x<1: 23.8%.
## The be method chose 10 surrogate variable(s).
## Attempting svaseq estimation with 10 surrogates.
## There are 30709 (2.79%) elements which are < 0 after batch correction.

## This function will replace the expt$expressionset slot with:
## simple(data)
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Leaving the data in its current base format, keep in mind that
##  some metrics are easier to see when the data is log2 transformed, but
##  EdgeR/DESeq do not accept transformed data.
## Leaving the data unconverted.  It is often advisable to cpm/rpkm
##  the data to normalize for sampling differences, keep in mind though that rpkm
##  has some annoying biases, and voom() by default does a cpm (though hpgl_voom()
##  will try to detect this).
## Leaving the data unnormalized.  This is necessary for DESeq, but
##  EdgeR/limma might benefit from normalization.  Good choices include quantile,
##  size-factor, tmm, etc.
## Not correcting the count-data for batch effects.  If batch is
##  included in EdgerR/limma's model, then this is probably wise; but in extreme
##  batch effects this is a good parameter to play with.
## Step 1: performing count filter with option: simple
## Removing 3175 low-count genes (16485 remaining).
## Step 2: not normalizing the data.
## Step 3: not converting the data.
## Step 4: not transforming the data.
## Step 5: not doing batch correction.
## batch_counts: Before batch/surrogate estimation, 616059 entries are x>1: 66.7%.
## batch_counts: Before batch/surrogate estimation, 269846 entries are x==0: 29.2%.
## batch_counts: Before batch/surrogate estimation, 2347 entries are 0<x<1: 0.254%.
## The be method chose 3 surrogate variable(s).
## Attempting svaseq estimation with 3 surrogates.
## This function will replace the expt$expressionset slot with:
## log2(cpm(quant(cbcb(data))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Not correcting the count-data for batch effects.  If batch is
##  included in EdgerR/limma's model, then this is probably wise; but in extreme
##  batch effects this is a good parameter to play with.
## Step 1: performing count filter with option: cbcb
## Removing 0 low-count genes (16485 remaining).
## Step 2: normalizing the data with quant.
## Using normalize.quantiles.robust due to a thread error in preprocessCore.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 117451 values equal to 0, adding 1 to the matrix.
## Step 5: not doing batch correction.
## Plotting a PCA before surrogates/batch inclusion.
## Using svaseq to visualize before/after batch inclusion.
## Performing a test normalization with: raw
## This function will replace the expt$expressionset slot with:
## log2(svaseq(cpm(cbcb(data))))
## It backs up the current data into a slot named:
##  expt$backup_expressionset. It will also save copies of each step along the way
##  in expt$normalized with the corresponding libsizes. Keep the libsizes in mind
##  when invoking limma.  The appropriate libsize is the non-log(cpm(normalized)).
##  This is most likely kept at:
##  'new_expt$normalized$intermediate_counts$normalization$libsizes'
##  A copy of this may also be found at:
##  new_expt$best_libsize
## Leaving the data unnormalized.  This is necessary for DESeq, but
##  EdgeR/limma might benefit from normalization.  Good choices include quantile,
##  size-factor, tmm, etc.
## Step 1: performing count filter with option: cbcb
## Removing 0 low-count genes (16485 remaining).
## Step 2: not normalizing the data.
## Step 3: converting the data with cpm.
## Step 4: transforming the data with log2.
## transform_counts: Found 269846 values equal to 0, adding 1 to the matrix.
## Step 5: doing batch correction with svaseq.
## Note to self:  If you get an error like 'x contains missing values' The data has too many 0's and needs a stronger low-count filter applied.
## Passing off to all_adjusters.
## batch_counts: Before batch/surrogate estimation, 511083 entries are x>1: 55.4%.
## batch_counts: Before batch/surrogate estimation, 269846 entries are x==0: 29.2%.
## batch_counts: Before batch/surrogate estimation, 142231 entries are 0<x<1: 15.4%.
## The be method chose 7 surrogate variable(s).
## Attempting svaseq estimation with 7 surrogates.
## There are 61691 (6.68%) elements which are < 0 after batch correction.
## Starting basic_pairwise().
## Starting basic pairwise comparison.
## Leaving the data alone, regardless of normalization state.
## Basic step 0/3: Transforming data.
## Basic step 1/3: Creating median and variance tables.
## Basic step 2/3: Performing 3 comparisons.
## Basic step 3/3: Creating faux DE Tables.
## Basic: Returning tables.
## Starting deseq_pairwise().
## Starting DESeq2 pairwise comparisons.
## About to round the data, this is a pretty terrible thing to do. But if you, like me, want to see what happens when you put non-standard data into deseq, then here you go.
## Warning in choose_binom_dataset(input, force = force): This data was
## inappropriately forced into integers.
## Including batch estimates from sva/ruv/pca in the model.
## Choosing the non-intercept containing model.
## DESeq2 step 1/5: Including a matrix of batch estimates in the deseq model.
## converting counts to integer mode
## DESeq2 step 2/5: Estimate size factors.
## DESeq2 step 3/5: Estimate dispersions.
## gene-wise dispersion estimates
## mean-dispersion relationship
## final dispersion estimates
## Using a parametric fitting seems to have worked.
## DESeq2 step 4/5: nbinomWaldTest.
## Starting ebseq_pairwise().
## The data should be suitable for EdgeR/DESeq/EBSeq. If they freak out, check the state of the count table and ensure that it is in integer counts.
## Starting EBSeq pairwise subset.
## Choosing the non-intercept containing model.
## Starting EBTest of no vs. yes.
## Copying ppee values as ajusted p-values until I figure out how to deal with them.
## Starting edger_pairwise().
## Starting edgeR pairwise comparisons.
## About to round the data, this is a pretty terrible thing to do. But if you, like me, want to see what happens when you put non-standard data into deseq, then here you go.
## Warning in choose_binom_dataset(input, force = force): This data was
## inappropriately forced into integers.
## Including batch estimates from sva/ruv/pca in the model.
## Choosing the non-intercept containing model.
## EdgeR step 1/9: Importing and normalizing data.
## EdgeR step 2/9: Estimating the common dispersion.
## EdgeR step 3/9: Estimating dispersion across genes.
## EdgeR step 4/9: Estimating GLM Common dispersion.
## EdgeR step 5/9: Estimating GLM Trended dispersion.
## EdgeR step 6/9: Estimating GLM Tagged dispersion.
## EdgeR step 7/9: Running glmFit, switch to glmQLFit by changing the argument 'edger_test'.
## EdgeR step 8/9: Making pairwise contrasts.

## Starting limma_pairwise().
## Starting limma pairwise comparison.
## Leaving the data alone, regardless of normalization state.
## libsize was not specified, this parameter has profound effects on limma's result.
## Using the libsize from expt$best_libsize.
## Limma step 1/6: choosing model.
## Including batch estimates from sva/ruv/pca in the model.
## Choosing the non-intercept containing model.
## Limma step 2/6: running limma::voom(), switch with the argument 'which_voom'.
## Using normalize.method=quantile for voom.

## Warning in regularize.values(x, y, ties, missing(ties)): collapsing to
## unique 'x' values
## Limma step 3/6: running lmFit with method: ls.
## Limma step 4/6: making and fitting contrasts with no intercept. (~ 0 + factors)
## Limma step 5/6: Running eBayes with robust=FALSE and trend=FALSE.
## Limma step 6/6: Writing limma outputs.
## Limma step 6/6: 1/1: Creating table: yes_vs_no.  Adjust=BH
## Limma step 6/6: 1/2: Creating table: no.  Adjust=BH
## Limma step 6/6: 2/2: Creating table: yes.  Adjust=BH
## Comparing analyses.

## Writing a legend of columns.
## Printing a pca plot before/after surrogates/batch estimation.
## Warning in MASS::cov.trob(data[, vars]): Probable convergence failure
## Warning in MASS::cov.trob(data[, vars]): Probable convergence failure

## Warning in MASS::cov.trob(data[, vars]): Probable convergence failure

## Warning in MASS::cov.trob(data[, vars]): Probable convergence failure

## Warning in MASS::cov.trob(data[, vars]): Probable convergence failure

## Warning in MASS::cov.trob(data[, vars]): Probable convergence failure

## Warning in MASS::cov.trob(data[, vars]): Probable convergence failure

## Warning in MASS::cov.trob(data[, vars]): Probable convergence failure
## Working on 1/1: infection which is: yes/no.
## Found table with yes_vs_no
## 20181210 a pthread error in normalize.quantiles leads me to robust.
## Used Bon Ferroni corrected t test(s) between columns.
## Used Bon Ferroni corrected t test(s) between columns.
## Used Bon Ferroni corrected t test(s) between columns.
## Adding venn plots for infection.

## Limma expression coefficients for infection; R^2: 0.992; equation: y = 1.01x - 0.066
## Edger expression coefficients for infection; R^2: 0.99; equation: y = 1.01x - 0.12
## DESeq2 expression coefficients for infection; R^2: 0.991; equation: y = 0.995x - 0.0119
## Writing summary information.
## Attempting to add the comparison plot to pairwise_summary at row: 23 and column: 1
## Performing save of the workbook.

## Writing a legend of columns.
## The count is: 1 and the test is: limma.
## The count is: 2 and the test is: edger.
## The count is: 3 and the test is: deseq.
## The count is: 4 and the test is: ebseq.
## The count is: 5 and the test is: basic.
## Writing excel data according to limma for infection: 1/5.
## After (adj)p filter, the up genes table has 68 genes.
## After (adj)p filter, the down genes table has 37 genes.
## After fold change filter, the up genes table has 50 genes.
## After fold change filter, the down genes table has 31 genes.
## Printing significant genes to the file: excel/MmM0Lm4h_vs_MmPMNLm12h_sig_tables.xlsx
## 1/1: Creating significant table up_1limma_infection
## Writing excel data according to edger for infection: 1/5.
## After (adj)p filter, the up genes table has 350 genes.
## After (adj)p filter, the down genes table has 155 genes.
## After fold change filter, the up genes table has 193 genes.
## After fold change filter, the down genes table has 102 genes.
## Printing significant genes to the file: excel/MmM0Lm4h_vs_MmPMNLm12h_sig_tables.xlsx
## 1/1: Creating significant table up_1edger_infection
## Writing excel data according to deseq for infection: 1/5.
## After (adj)p filter, the up genes table has 287 genes.
## After (adj)p filter, the down genes table has 299 genes.
## After fold change filter, the up genes table has 163 genes.
## After fold change filter, the down genes table has 144 genes.
## Printing significant genes to the file: excel/MmM0Lm4h_vs_MmPMNLm12h_sig_tables.xlsx
## 1/1: Creating significant table up_1deseq_infection
## Writing excel data according to ebseq for infection: 1/5.
## After (adj)p filter, the up genes table has 288 genes.
## After (adj)p filter, the down genes table has 684 genes.
## After fold change filter, the up genes table has 173 genes.
## After fold change filter, the down genes table has 564 genes.
## Printing significant genes to the file: excel/MmM0Lm4h_vs_MmPMNLm12h_sig_tables.xlsx
## 1/1: Creating significant table up_1ebseq_infection
## Writing excel data according to basic for infection: 1/5.
## After (adj)p filter, the up genes table has 6 genes.
## After (adj)p filter, the down genes table has 0 genes.
## After fold change filter, the up genes table has 6 genes.
## After fold change filter, the down genes table has 0 genes.
## Printing significant genes to the file: excel/MmM0Lm4h_vs_MmPMNLm12h_sig_tables.xlsx
## 1/1: Creating significant table up_1basic_infection
## Adding significance bar plots.

I think this handles questions a through e?

R version 3.6.0 beta (2019-04-11 r76379)

Platform: x86_64-pc-linux-gnu (64-bit)

locale: LC_CTYPE=en_US.UTF-8, LC_NUMERIC=C, LC_TIME=en_US.UTF-8, LC_COLLATE=en_US.UTF-8, LC_MONETARY=en_US.UTF-8, LC_MESSAGES=en_US.UTF-8, LC_PAPER=en_US.UTF-8, LC_NAME=C, LC_ADDRESS=C, LC_TELEPHONE=C, LC_MEASUREMENT=en_US.UTF-8 and LC_IDENTIFICATION=C

attached base packages: parallel, stats, graphics, grDevices, utils, datasets, methods and base

other attached packages: edgeR(v.3.24.3), foreach(v.1.4.4), ruv(v.0.9.7), hpgltools(v.1.0), Biobase(v.2.42.0) and BiocGenerics(v.0.28.0)

loaded via a namespace (and not attached): tidyselect(v.0.2.5), lme4(v.1.1-21), htmlwidgets(v.1.3), RSQLite(v.2.1.1), AnnotationDbi(v.1.44.0), grid(v.3.6.0), BiocParallel(v.1.16.6), Rtsne(v.0.15), devtools(v.2.0.1), munsell(v.0.5.0), codetools(v.0.2-16), preprocessCore(v.1.45.0), withr(v.2.1.2), colorspace(v.1.4-1), GOSemSim(v.2.8.0), knitr(v.1.22), rstudioapi(v.0.10), stats4(v.3.6.0), Vennerable(v.3.1.0.9000), robustbase(v.0.93-4), DOSE(v.3.8.2), labeling(v.0.3), urltools(v.1.7.2), tximport(v.1.10.1), GenomeInfoDbData(v.1.2.0), polyclip(v.1.10-0), bit64(v.0.9-7), farver(v.1.1.0), rprojroot(v.1.3-2), xfun(v.0.6), R6(v.2.4.0), doParallel(v.1.0.14), GenomeInfoDb(v.1.18.2), locfit(v.1.5-9.1), bitops(v.1.0-6), fgsea(v.1.8.0), gridGraphics(v.0.3-0), DelayedArray(v.0.8.0), assertthat(v.0.2.1), scales(v.1.0.0), ggraph(v.1.0.2), nnet(v.7.3-12), enrichplot(v.1.2.0), gtable(v.0.3.0), sva(v.3.30.1), processx(v.3.3.0), rlang(v.0.3.3), genefilter(v.1.64.0), splines(v.3.6.0), rtracklayer(v.1.42.2), lazyeval(v.0.2.2), acepack(v.1.4.1), checkmate(v.1.9.1), europepmc(v.0.3), yaml(v.2.2.0), reshape2(v.1.4.3), GenomicFeatures(v.1.34.7), backports(v.1.1.3), qvalue(v.2.14.1), Hmisc(v.4.2-0), RBGL(v.1.58.2), clusterProfiler(v.3.10.1), tools(v.3.6.0), usethis(v.1.4.0), ggplotify(v.0.0.3), ggplot2(v.3.1.0), gplots(v.3.0.1.1), RColorBrewer(v.1.1-2), blockmodeling(v.0.3.4), sessioninfo(v.1.1.1), ggridges(v.0.5.1), Rcpp(v.1.0.1), plyr(v.1.8.4), base64enc(v.0.1-3), progress(v.1.2.0), zlibbioc(v.1.28.0), purrr(v.0.3.2), RCurl(v.1.95-4.12), ps(v.1.3.0), prettyunits(v.1.0.2), rpart(v.4.1-13), viridis(v.0.5.1), cowplot(v.0.9.4), S4Vectors(v.0.20.1), SummarizedExperiment(v.1.12.0), ggrepel(v.0.8.0), cluster(v.2.0.8), colorRamps(v.2.3), fs(v.1.2.7), variancePartition(v.1.12.3), magrittr(v.1.5), data.table(v.1.12.0), DO.db(v.2.9), openxlsx(v.4.1.0), triebeard(v.0.3.0), packrat(v.0.5.0), matrixStats(v.0.54.0), pkgload(v.1.0.2), hms(v.0.4.2), evaluate(v.0.13), xtable(v.1.8-3), pbkrtest(v.0.4-7), XML(v.3.98-1.19), readxl(v.1.3.1), IRanges(v.2.16.0), gridExtra(v.2.3), testthat(v.2.0.1), compiler(v.3.6.0), biomaRt(v.2.38.0), tibble(v.2.1.1), KernSmooth(v.2.23-15), crayon(v.1.3.4), minqa(v.1.2.4), htmltools(v.0.3.6), mgcv(v.1.8-28), corpcor(v.1.6.9), snow(v.0.4-3), Formula(v.1.2-3), geneplotter(v.1.60.0), tidyr(v.0.8.3), DBI(v.1.0.0), tweenr(v.1.0.1), MASS(v.7.3-51.3), boot(v.1.3-20), Matrix(v.1.2-17), readr(v.1.3.1), cli(v.1.1.0), quadprog(v.1.5-5), gdata(v.2.18.0), igraph(v.1.2.4), GenomicRanges(v.1.34.0), pkgconfig(v.2.0.2), registry(v.0.5-1), rvcheck(v.0.1.3), GenomicAlignments(v.1.18.1), foreign(v.0.8-71), xml2(v.1.2.0), annotate(v.1.60.1), rngtools(v.1.3.1), pkgmaker(v.0.27), XVector(v.0.22.0), bibtex(v.0.4.2), doRNG(v.1.7.1), EBSeq(v.1.22.1), stringr(v.1.4.0), callr(v.3.2.0), digest(v.0.6.18), graph(v.1.60.0), Biostrings(v.2.50.2), cellranger(v.1.1.0), rmarkdown(v.1.12), fastmatch(v.1.1-0), htmlTable(v.1.13.1), directlabels(v.2018.05.22), curl(v.3.3), Rsamtools(v.1.34.1), gtools(v.3.8.1), nloptr(v.1.2.1), nlme(v.3.1-137), jsonlite(v.1.6), desc(v.1.2.0), viridisLite(v.0.3.0), limma(v.3.38.3), pillar(v.1.3.1), lattice(v.0.20-38), DEoptimR(v.1.0-8), httr(v.1.4.0), pkgbuild(v.1.0.3), survival(v.2.44-1.1), GO.db(v.3.7.0), glue(v.1.3.1), remotes(v.2.0.2), zip(v.2.0.1), UpSetR(v.1.3.3), iterators(v.1.0.10), pander(v.0.6.3), bit(v.1.1-14), ggforce(v.0.2.1), stringi(v.1.4.3), blob(v.1.1.1), DESeq2(v.1.22.2), doSNOW(v.1.0.16), latticeExtra(v.0.6-28), caTools(v.1.17.1.2), memoise(v.1.1.0) and dplyr(v.0.8.0.1)

## If you wish to reproduce this exact build of hpgltools, invoke the following:
## > git clone http://github.com/abelew/hpgltools.git
## > git reset 0ebb3165f07d676a83da460824f337251efbcf69
## This is hpgltools commit: Fri Apr 12 15:03:48 2019 -0400: 0ebb3165f07d676a83da460824f337251efbcf69
---
title: "20190417 An attempt to answer one of the big questions from Najib."
author: "atb abelew@gmail.com"
date: "`r Sys.Date()`"
output:
  html_document:
    code_download: true
    code_folding: show
    fig_caption: true
    fig_height: 7
    fig_width: 7
    highlight: tango
    keep_md: false
    mode: selfcontained
    number_sections: true
    self_contained: true
    theme: readable
    toc: true
    toc_float:
      collapsed: false
      smooth_scroll: false
  rmdformats::readthedown:
    code_download: true
    code_folding: show
    df_print: paged
    fig_caption: true
    fig_height: 7
    fig_width: 7
    highlight: tango
    width: 300
    keep_md: false
    mode: selfcontained
    toc_float: true
  BiocStyle::html_document:
    code_download: true
    code_folding: show
    fig_caption: true
    fig_height: 7
    fig_width: 7
    highlight: tango
    keep_md: false
    mode: selfcontained
    toc_float: true
---

<style type="text/css">
body, td {
  font-size: 16px;
}
code.r{
  font-size: 16px;
}
pre {
 font-size: 16px
}
</style>

```{r options, include=FALSE}
library(hpgltools)
tt <- sm(devtools::load_all("~/hpgltools"))
knitr::opts_knit$set(progress=TRUE,
                     verbose=TRUE,
                     width=120,
                     echo=TRUE)
knitr::opts_chunk$set(error=TRUE,
                      fig.width=8,
                      fig.height=8,
                      dpi=96)
old_options <- options(digits=4,
                       stringsAsFactors=FALSE,
                       knitr.duplicate.label="allow")
ggplot2::theme_set(ggplot2::theme_bw(base_size=12))
ver <- "20190417"
rundate <- format(Sys.Date(), format="%Y%m%d")
rmd_file <- "20190417_HsMm_M0Lm4h.Rmd"
```

This document will first explore differentially expressed genes in humans 4
hours after infection followed by the same question in mice.

# Which genes are DE in human macrophages at 4 hours upon infection with L. major?

# Gather annotation data

I want to perform a series of comparisons among the host cells: human and mouse.
Thus I need to collect annotation data for both species and get the set of
orthologs between them.

## Start with the human annotation data

In the following block, I download the human annotations from biomart. In
addition, I take a moment to recreate the transcript IDs as observed in the
salmon count tables (yes, I know they are not actually count tables).  Finally,
I create a table which maps transcripts to genes; this will be used when we
generate the expressionset so that we get gene expression levels from
transcripts via the R package 'tximport'.

```{r human_annotations}
hs_annot <- load_biomart_annotations()$annotation
rownames(hs_annot) <- make.names(
  paste0(hs_annot[["ensembl_transcript_id"]], ".",
         hs_annot[["transcript_version"]]),
  unique=TRUE)
hs_tx_gene <- hs_annot[, c("ensembl_gene_id", "ensembl_transcript_id")]
hs_tx_gene[["id"]] <- rownames(hs_tx_gene)
hs_tx_gene <- hs_tx_gene[, c("id", "ensembl_gene_id")]
new_hs_annot <- hs_annot
rownames(new_hs_annot) <- make.names(hs_annot[["ensembl_gene_id"]], unique=TRUE)
```
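
create_expt() handles the transcript-to-gene summarization internally via its
tx_gene_map argument; for reference, here is roughly what that step looks like
when done by hand with tximport.  This is a minimal sketch and is not evaluated
here; the quant.sf paths are hypothetical placeholders.

```{r tximport_sketch, eval=FALSE}
## A by-hand version of the tximport step wrapped by create_expt().
## The salmon quant.sf paths below are hypothetical placeholders.
library(tximport)
salmon_files <- c(
  "sample_01"="preprocessing/sample_01/salmon/quant.sf",
  "sample_02"="preprocessing/sample_02/salmon/quant.sf")
## hs_tx_gene has the columns 'id' (versioned transcript) and 'ensembl_gene_id',
## which matches the two-column transcript->gene layout tximport expects.
txi <- tximport(salmon_files, type="salmon", tx2gene=hs_tx_gene)
## txi$counts holds gene-level estimated counts, txi$abundance the TPMs.
head(txi$counts)
```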

## Generate expressionsets

The question is reasonably self-contained.  I want to compare the uninfected
human samples against any samples which were infected for 4 hours.
So let us first pull those samples and then poke at them a bit.

The following block creates an expressionset using all human-quantified
samples. As mentioned previously, it uses the table of transcript<->gene
mappings, and the biomart annotations.

Given this set of ~440 samples, it then drops the following:

1.  All samples marked 'skipped'.
2.  All samples which are not from time 't4h'.

and resets the condition and batch factors to the 'infection state' metadatum
and 'study', respectively.

```{r expts}
sample_sheet <- "sample_sheets/leishmania_host_metasheet_20190401.xlsx"
hs_expt <- create_expt(sample_sheet,
                       file_column="hsapiensfile",
                       gene_info=new_hs_annot,
                       tx_gene_map=hs_tx_gene)

hs_expt_noskipped <- subset_expt(hs_expt, subset="skipped!='yes'")
hs_t4h_expt <- subset_expt(hs_expt_noskipped, subset="expttime=='t4h'")
hs_t4h_expt <- set_expt_conditions(hs_t4h_expt, fact="infectstate")
hs_t4h_expt <- set_expt_batches(hs_t4h_expt, fact="study")
table(hs_t4h_expt$conditions)
table(hs_t4h_expt$batches)
hs_written <- write_expt(hs_t4h_expt, excel="excel/HsM0Lm4h_expt.xlsx")
```

## Examine t4h vs uninfected

Let us perform some generic metrics of the t4h human expressionset.  As per
usual, I plot the metrics first of the raw data; followed by the same metrics of
log2(quantile(cpm(sva(filtered(data))))).

```{r hs_examine_t4h, fig.show='hide'}
hs_t4h_plots <- sm(graph_metrics(hs_t4h_expt))

hs_t4h_norm <- normalize_expt(hs_t4h_expt, norm="quant", convert="cpm",
                              transform="log2", filter=TRUE, batch="svaseq")
hs_t4h_norm_plots <- sm(graph_metrics(hs_t4h_norm))
```

### Print some of the plots

```{r hs_examine_plots}
hs_t4h_plots$legend
hs_t4h_plots$libsize
hs_t4h_plots$boxplot

hs_t4h_norm_plots$pc_plot
```

## Remove stimulated samples

I perhaps should have removed the stimulated samples sooner, but I was curious
to see their effect on the distribution first.

```{r hs_t4h_nounstim}
hs_t4h_inf <- subset_expt(hs_t4h_expt, subset="condition!='stim'")
hs_t4h_inf_norm <- normalize_expt(hs_t4h_inf, transform="log2", convert="cpm",
                                  filter=TRUE, batch="svaseq")

hs_t4h_pca <- plot_pca(hs_t4h_inf_norm, plot_title="H. sapiens, L. major, t4h")
hs_t4h_pca$plot

keepers <- list("infection" = c("yes", "no"))
hs_t4h_de <- all_pairwise(hs_t4h_inf, model_batch="svaseq", filter=TRUE, force=TRUE)
hs_t4h_table <- combine_de_tables(hs_t4h_de, keepers=keepers,
                                  excel="excel/HsM0Lm4h_de_tables.xlsx")
hs_t4h_sig <- extract_significant_genes(hs_t4h_table, excel="excel/HsM0Lm4h_sig_tables.xlsx")
```
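
For orientation, the combined table returned by combine_de_tables() is what the
later comparison chunks poke at.  Here is a small, unevaluated sketch of pulling
that table out and sorting it by the DESeq2 adjusted p-value; the column names
are the same ones used later in this document.

```{r hs_t4h_top_sketch, eval=FALSE}
## Peek at the combined DE table for the infection contrast and list the
## genes with the smallest DESeq2 adjusted p-values.
hs_infection_table <- hs_t4h_table[["data"]][[1]]
wanted_columns <- c("deseq_logfc", "deseq_adjp", "limma_logfc", "limma_adjp")
head(hs_infection_table[order(hs_infection_table[["deseq_adjp"]]), wanted_columns])
```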

# Which genes are DE in mouse macrophages at 4 hours upon infection with L. major?

Most of this follows the same process as the human analysis above.

## Gather annotation data

I want to perform a series of comparisons among the host cells: human and mouse.
Thus I need to collect annotation data for both species and get the set of
orthologs between them.

### Start with the mouse annotation data

```{r mouse_annotations}
mm_annot <- load_biomart_annotations(species="mmusculus")$annotation
rownames(mm_annot) <- make.names(
  paste0(mm_annot[["ensembl_transcript_id"]], ".",
         mm_annot[["transcript_version"]]),
  unique=TRUE)
mm_tx_gene <- mm_annot[, c("ensembl_gene_id", "ensembl_transcript_id")]
mm_tx_gene[["id"]] <- rownames(mm_tx_gene)
mm_tx_gene <- mm_tx_gene[, c("id", "ensembl_gene_id")]
new_mm_annot <- mm_annot
rownames(new_mm_annot) <- make.names(mm_annot[["ensembl_gene_id"]], unique=TRUE)
```

## Generate expressionsets

The question is reasonably self-contained.  I want to compare the uninfected
mouse samples against any samples which were infected for 4 hours.
So let us first pull those samples and then poke at them a bit.

```{r mouse_expts}
mm_expt <- create_expt(sample_sheet,
                       file_column="mmusculusfile",
                       gene_info=new_mm_annot,
                       tx_gene_map=mm_tx_gene)
mm_t4h_expt <- subset_expt(mm_expt, subset="expttime=='t4h'")
mm_t4h_expt <- set_expt_conditions(mm_t4h_expt, fact="infectstate")
table(mm_t4h_expt$conditions)
table(mm_t4h_expt$batches)
mm_written <- write_expt(mm_t4h_expt, excel="excel/MmM0Lm4h_expt.xlsx")
```

## Examine t4h vs uninfected

```{r examine_t4h, fig.show='hide'}
mm_t4h_plots <- sm(graph_metrics(mm_t4h_expt))
mm_t4h_norm <- normalize_expt(mm_t4h_expt, norm="quant", convert="cpm",
                              transform="log2", filter=TRUE, batch="svaseq")
mm_t4h_norm_plots <- sm(graph_metrics(mm_t4h_norm))
```

### Print some of the plots

```{r mm_examine_plots}
mm_t4h_plots$legend
mm_t4h_plots$libsize
mm_t4h_plots$boxplot

mm_t4h_norm_plots$pc_plot
```

## Perform DE analyses

```{r mm_t4h_de}
mm_t4h_inf_norm <- normalize_expt(mm_t4h_expt, transform="log2", convert="cpm",
                                  filter=TRUE, batch="svaseq")

mm_t4h_pca <- plot_pca(mm_t4h_inf_norm, plot_title="M. musculus, L. major, t4h")
mm_t4h_pca$plot

mm_t4h_de <- all_pairwise(mm_t4h_expt, model_batch="svaseq", filter=TRUE, force=TRUE)
mm_t4h_table <- combine_de_tables(mm_t4h_de, keepers=keepers,
                                  excel="excel/MmM0Lm4h_de_tables.xlsx")
mm_t4h_sig <- extract_significant_genes(mm_t4h_table,
                                        excel="excel/MmM0Lm4h_sig_tables.xlsx")
```

## Compare this to the previous result.

Let us see if our human differential expression result is similar to that
obtained in Table S2.

```{r compare_previous_hs}
previous_hs <- readxl::read_excel("excel/inline-supplementary-material-5.xls", sheet=2)
previous_hs_lfc <- previous_hs[, c("ID", "Fold change")]
neg_idx <- previous_hs_lfc[[2]] < 0
previous_hs_lfc[neg_idx, 2] <- -1 * (1 / previous_hs_lfc[neg_idx, 2])
previous_hs_lfc[[2]] <- log2(previous_hs_lfc[[2]])

merged <- merge(previous_hs_lfc, hs_t4h_table$data[[1]], by.x="ID", by.y="row.names")
cor.test(merged[["limma_logfc"]], merged[["Fold change"]])
```
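
As a sanity check on the sign convention above (the previous tables report
down-regulation as a negative reciprocal fold change), here is the same
conversion applied to a few toy values; a fold change of -2 should come out as
a log2 fold change of -1.  This sketch is not evaluated, it is just for
illustration.

```{r fc_conversion_sketch, eval=FALSE}
## Convert signed reciprocal fold changes into log2 fold changes, mirroring
## the transformation applied to the 'Fold change' column above.
fc_to_log2 <- function(fc) {
  neg_idx <- fc < 0
  fc[neg_idx] <- -1 * (1 / fc[neg_idx])  ## e.g. -2 becomes 0.5
  log2(fc)
}
fc_to_log2(c(4, 2, -2, -4))  ## expected: 2, 1, -1, -2
```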

## Compare the previous mouse results and these mouse results.

```{r compare_previous_mm}
previous_mm <- readxl::read_excel("excel/12864_2015_2237_MOESM3_ESM.xls", sheet=2, skip=1)
previous_mm_lfc <- previous_mm[, c("ID", "Fold change")]
neg_idx <- previous_mm_lfc[[2]] < 0
previous_mm_lfc[neg_idx, 2] <- -1 * (1 / previous_mm_lfc[neg_idx, 2])
previous_mm_lfc[[2]] <- log2(previous_mm_lfc[[2]])

merged <- merge(previous_mm_lfc, mm_t4h_table$data[[1]], by.x="ID", by.y="row.names")
cor.test(merged[["limma_logfc"]], merged[["Fold change"]])
```

# What genes are shared in the mouse and human data?

This is one method of addressing Najib's big question #1c.  In this method, I
take the tables from the human analysis and the mouse analysis separately, then
merge them using a table of orthologs.

Side note: a different way of addressing this question resides in
20190220_host_comparisons.Rmd. In that competing method, the table of orthologs
is used at the beginning to make a single set of IDs for the human and mouse
genes, and the differential expression analysis is then performed on that
merged set.

## Extract human mouse orthologs

My load_biomart_orthologs() function should provide this gene ID mapping table.

```{r mm_hs_ortholog}
## The defaults of this function are suitable for mouse/human queries.
mm_hs_ortho <- load_biomart_orthologs()$all_linked_genes

mm_table <- mm_t4h_table$data[[1]]
hs_table <- hs_t4h_table$data[[1]]

mm_table <- merge(mm_hs_ortho, mm_table, by.x="mmusculus", by.y="row.names", all.y=TRUE)
hs_table <- merge(mm_hs_ortho, hs_table, by.x="hsapiens", by.y="row.names", all.y=TRUE)
both_table_hs <- merge(hs_table, mm_table, by.x="hsapiens", by.y="hsapiens")
both_table_mm <- merge(hs_table, mm_table, by.x="mmusculus", by.y="mmusculus")

cor.test(both_table_hs[["limma_logfc.x"]], both_table_hs[["limma_logfc.y"]])
cor.test(both_table_mm[["limma_logfc.x"]], both_table_mm[["limma_logfc.y"]])
tt <- plot_scatter(both_table_hs[, c("limma_logfc.x", "limma_logfc.y")])
tt
```
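
One caveat with this merge-by-ortholog approach: one-to-many orthologs duplicate
rows and can shift the correlations a bit.  Here is a quick, unevaluated check
of how much duplication the ortholog table introduces; the column names are
taken from the merge calls above.

```{r ortholog_dup_sketch, eval=FALSE}
## How many human and mouse IDs appear more than once in the ortholog table?
sum(duplicated(mm_hs_ortho[["hsapiens"]]))
sum(duplicated(mm_hs_ortho[["mmusculus"]]))
## Row counts before and after the cross-species merges.
c(hs=nrow(hs_table), mm=nrow(mm_table),
  both_hs=nrow(both_table_hs), both_mm=nrow(both_table_mm))
```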

I believe these both_table_hs and both_table_mm tables are good candidates for
the set of genes which are shared across the human and mouse samples, merged by
the human and mouse gene IDs, respectively.

Now let's write some of these out.

```{r write_shared_tables}
shared_hsmm_up_idx <- both_table_hs[["deseq_logfc.x"]] > 1 &
  both_table_hs[["deseq_logfc.y"]] > 1 &
  both_table_hs[["deseq_adjp.x"]] <= 0.05 &
  both_table_hs[["deseq_adjp.y"]] <= 0.05
shared_hsmm_up <- both_table_hs[shared_hsmm_up_idx, ]
written <- write_xls(data=shared_hsmm_up,
                     excel="excel/HsM0Lm4h_vs_MmM0Lm4h_shared_up_hs.xlsx")
shared_mmhs_up_idx <- both_table_mm[["deseq_logfc.x"]] > 1 &
  both_table_mm[["deseq_logfc.y"]] > 1 &
  both_table_mm[["deseq_adjp.x"]] <= 0.05 &
  both_table_mm[["deseq_adjp.y"]] <= 0.05
shared_mmhs_up <- both_table_mm[shared_mmhs_up_idx, ]
written <- write_xls(data=shared_mmhs_up,
                     excel="excel/HsM0Lm4h_vs_MmM0Lm4h_shared_up_mm.xlsx")

shared_hsmm_down_idx <- both_table_hs[["deseq_logfc.x"]] < -1 &
  both_table_hs[["deseq_logfc.y"]] < -1 &
  both_table_hs[["deseq_adjp.x"]] <= 0.05 &
  both_table_hs[["deseq_adjp.y"]] <= 0.05
shared_hsmm_down <- both_table_hs[shared_hsmm_down_idx, ]
written <- write_xls(data=shared_hsmm_down,
                     excel="excel/HsM0Lm4h_vs_MmM0Lm4h_shared_down_hs.xlsx")
shared_mmhs_down_idx <- both_table_mm[["deseq_logfc.x"]] < -1 &
  both_table_mm[["deseq_logfc.y"]] < -1 &
  both_table_mm[["deseq_adjp.x"]] <= 0.05 &
  both_table_mm[["deseq_adjp.y"]] <= 0.05
shared_mmhs_down <- both_table_mm[shared_mmhs_down_idx, ]
written <- write_xls(data=shared_mmhs_down,
                     excel="excel/HsM0Lm4h_vs_MmM0Lm4h_shared_down_mm.xlsx")
```
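
A quick, unevaluated tally of how many genes landed in each of the shared tables
written above:

```{r shared_counts_sketch, eval=FALSE}
## Number of genes passing each shared up/down filter.
c(up_hs=nrow(shared_hsmm_up), up_mm=nrow(shared_mmhs_up),
  down_hs=nrow(shared_hsmm_down), down_mm=nrow(shared_mmhs_down))
```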

## Compare the above with Table S7 from the Laura and Cecilia paper.

Table S7 contains a set of human genes which are also up-regulated in mouse upon
infection.

```{r open_figs7}
fig_s7_up <- readxl::read_excel("excel/inline-supplementary-material-7.xls", sheet=3)
fig_s7_hs_up <- unique(fig_s7_up[[5]])

fig_s7_down <- readxl::read_excel("excel/inline-supplementary-material-7.xls", sheet=4)
fig_s7_hs_down <- unique(fig_s7_down[[5]])

both_up_idx <- both_table_hs[["limma_logfc.x"]] >= 0.8 &
  both_table_hs[["limma_logfc.y"]] >= 0.8 &
  both_table_hs[["limma_adjp.x"]] <= 0.1 &
  both_table_hs[["limma_adjp.y"]] <= 0.1
both_up_ids <- both_table_hs[both_up_idx, "hsapiens"]
up_venn <- Vennerable::Venn(Sets=list("figs7" = fig_s7_hs_up, "tables" = both_up_ids))
Vennerable::plot(up_venn)

both_down_idx <- both_table_hs[["limma_logfc.x"]] <= -0.8 &
  both_table_hs[["limma_logfc.y"]] <= -0.8 &
  both_table_hs[["limma_adjp.x"]] <= 0.1 &
  both_table_hs[["limma_adjp.y"]] <= 0.1
both_down_ids <- both_table_hs[both_down_idx, "hsapiens"]
down_venn <- Vennerable::Venn(Sets=list("figs7" = fig_s7_hs_down, "tables" = both_down_ids))
Vennerable::plot(down_venn)
```
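
If only the overlap counts are wanted, rather than the Venn diagrams, base R set
operations on the same ID vectors give the same numbers; an unevaluated sketch:

```{r s7_overlap_sketch, eval=FALSE}
## Counts of shared and unique IDs between Table S7 and the tables above.
length(intersect(fig_s7_hs_up, both_up_ids))
length(setdiff(fig_s7_hs_up, both_up_ids))
length(setdiff(both_up_ids, fig_s7_hs_up))
length(intersect(fig_s7_hs_down, both_down_ids))
length(setdiff(fig_s7_hs_down, both_down_ids))
```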

# Next question: Which 4-hour genes are shared with CIDEIM?

"Which 4 hour DE genes are shared with the CIDEIM panamensis infected human
macrophages?"

So, once again there are two ways of approaching this question:

1.  Examine the question of infected vs. uninfected for the two data sets
    separately.  Then query the results and see what comes out.
2.  Examine this as a single question using the union of both data sets.

Since I already have a putative answer for the human 4-hour macrophage data, the
simplest method is to just look at the CIDEIM data and do #1 above.  So let us
do that first.

```{r cideim_separate}
cideim_macr <- subset_expt(hs_expt, subset="lab=='tmrc'&hostcelltype=='macrophage'")
cideim_macr <- set_expt_conditions(cideim_macr, fact="infectstate")
cideim_norm <- normalize_expt(cideim_macr, transform="log2", convert="cpm",
                              norm="quant", filter="simple", batch="svaseq")

plot_pca(cideim_norm)$plot

cideim_de <- all_pairwise(cideim_macr, filter=TRUE, parallel=FALSE, force=TRUE)
cideim_table <- combine_de_tables(cideim_de, keepers=keepers)

cideim_merged <- merge(hs_t4h_table$data[[1]], cideim_table$data[[1]], by="row.names")
cor.test(cideim_merged[["deseq_logfc.x"]], cideim_merged[["deseq_logfc.y"]])

plot_scatter(cideim_merged[, c("limma_logfc.x", "limma_logfc.y")])
```

## Try again as one big expressionset

```{r cideim_big_exprs}
hs_expt_noskipped <- subset_expt(hs_expt, subset="skipped!='yes'")
hs_t4h_expt <- subset_expt(hs_expt_noskipped, subset="expttime=='t4h'")
hs_t4h_expt <- set_expt_conditions(hs_t4h_expt, fact="infectstate")
cideim_t4h <- subset_expt(
  hs_expt,
  subset="(skipped!='yes'&expttime=='t4h')|(lab=='tmrc'&hostcelltype=='macrophage')")
cideim_t4h <- set_expt_conditions(cideim_t4h, fact="infectstate")

cideim_t4h_norm <- normalize_expt(cideim_t4h, filter=TRUE, norm="quant",
                                  convert="cpm", transform="log2")
plot_pca(cideim_t4h_norm)$plot
cideim_t4h_normbatch <- normalize_expt(cideim_t4h, filter=TRUE, norm="quant",
                                       convert="cpm", batch="svaseq", transform="log2")
plot_pca(cideim_t4h_normbatch)$plot

cideim_t4h_de <- all_pairwise(cideim_t4h, model_batch="svaseq",
                              filter=TRUE, force=TRUE, parallel=FALSE)
cideim_t4h_tables <- combine_de_tables(cideim_t4h_de, keepers=keepers,
                                       excel="excel/HsM0Lm4h_vs_HsM0Lp_de_tables.xlsx")
cideim_t4h_sig <- extract_significant_genes(cideim_t4h_tables,
                                            excel="excel/HsM0Lm4h_vs_HsM0Lp_sig_tables.xlsx")
```

# Next, overlap between macrophages and neutrophils

All of the neutrophil data is in mouse, apparently.  This will make it more
difficult, perhaps impossible, to get an accurate answer.

So instead, look at infected vs. uninfected in mouse and then compare against
the earliest Sacks timepoints in neutrophils.

```{r neutrophil_macr_mouse}
subset <- "(hostcelltype=='PMN'&host=='mus_musculus'&expttime=='t12h') |
           (hostcelltype=='macrophage'&host=='mus_musculus')"
neut_macr_mus <- subset_expt(mm_expt, subset=subset)
neut_macr_mus <- subset_expt(neut_macr_mus, subset="infectstate!='stim'")
neut_macr_mus <- set_expt_conditions(neut_macr_mus, fact="infectstate")
neut_macr_mus <- set_expt_batches(neut_macr_mus, fact="hostcelltype")
neut_macr_mus_norm <- normalize_expt(neut_macr_mus, convert="cpm",
                                     norm="quant", filter=TRUE)
plot_pca(neut_macr_mus_norm)$plot

neut_macr_mus_normbatch <- normalize_expt(neut_macr_mus, convert="cpm",
                                          norm="quant", filter=TRUE, batch="svaseq")
plot_pca(neut_macr_mus_normbatch)$plot

neut_macr_mus_filt <- normalize_expt(neut_macr_mus, filter="simple")
neut_macr_mus_de <- all_pairwise(neut_macr_mus_filt, parallel=FALSE,
                                 force=TRUE, model_batch="svaseq")
neut_macr_mus_table <- combine_de_tables(neut_macr_mus_de, keepers=keepers,
                                         excel="excel/MmM0Lm4h_vs_MmPMNLm12h_de_tables.xlsx")
neut_macr_mus_sig <- extract_significant_genes(neut_macr_mus_table,
                                         excel="excel/MmM0Lm4h_vs_MmPMNLm12h_sig_tables.xlsx")
```

I think this handles questions a through e?

```{r saveme}
pander::pander(sessionInfo())
message("This is hpgltools commit: ", get_git_commit())
## message(paste0("Saving to ", savefile))
## tmp <- sm(saveme(filename=savefile))
```
