I have 6 single-end RNA-seq samples from a SOLiD 4 machine, mapped using BioScope 1.3. They are 3 treatment animals and 3 controls (mice), barcoded samples (2 slides), 50 bp reads.
My aim is to use CuffDiff (1.2.1) to look at differential expression (and I want to compare to DESeq, and maybe other tools as well). But I am wondering what exactly I should enter for the mean fragment length and standard deviation. I've read posts saying this should be the read length plus the adapters, but that makes no sense to me given the BAM files I have from BioScope. The length of the read plus adapters, barcode and so forth is 127 bp, which bears no relation to what is actually in my input BAM files for this analysis.
Taking those BAM files (the wt.sr.bam files from BioScope) I used FastQC to get the fragment length distribution for each wt.sr.bam file produced from the BioScope 1.3 mapping runs, dumped the data from FastQC into JMP Genomics and pulled up the distribution stats.
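For illustration of the kind of statistic involved (this is not what FastQC does internally, and the `length_stats` helper and the toy SAM records are hypothetical), a minimal Python sketch that computes the mean and standard deviation of mapped-read lengths from SAM-formatted alignment lines:

```python
import statistics

def length_stats(sam_lines):
    """Return (mean, population SD) of aligned read lengths from SAM lines."""
    lengths = []
    for line in sam_lines:
        if line.startswith("@"):            # skip header lines
            continue
        fields = line.rstrip("\n").split("\t")
        flag = int(fields[1])
        if flag & 0x4:                      # FLAG bit 0x4 = read unmapped; skip it
            continue
        lengths.append(len(fields[9]))      # column 10 is the SEQ field
    return statistics.mean(lengths), statistics.pstdev(lengths)

# Toy demo: two mapped reads (40 bp and 50 bp) and one unmapped read.
demo = [
    "@HD\tVN:1.6",
    "r1\t0\tchr1\t100\t60\t40M\t*\t0\t0\t" + "A" * 40 + "\t" + "I" * 40,
    "r2\t0\tchr1\t200\t60\t50M\t*\t0\t0\t" + "A" * 50 + "\t" + "I" * 50,
    "r3\t4\t*\t0\t0\t*\t*\t0\t0\t" + "A" * 30 + "\t" + "I" * 30,
]
mean, sd = length_stats(demo)
print(mean, sd)  # mean 45, SD 5: the unmapped 30 bp read is excluded
```

In practice you would stream `samtools view your.bam` into something like this rather than hold lines in memory.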
Sample         Mean length   Std. dev.   Mapped reads
Treatment -1   46.71         6.31        ~71.9 million
Treatment -2   46.59         6.42        ~72.4 million
Treatment -3   45.96         7.26        ~77.5 million
Control -1     45.41         7.34        ~76.0 million
Control -2     45.66         7.36        ~171.6 million
Control -3     46.26         6.69        ~109.4 million
Combined       46.027        6.997
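As a sanity check on the combined row, the pooled mean and SD can be reconstructed from the per-sample statistics alone, weighting by mapped-read count (the counts below are the approximate values from the table, so the result only agrees to within rounding):

```python
import math

# (sample, mean length, SD, mapped reads in millions) from the table above
samples = [
    ("Treatment-1", 46.71, 6.31,  71.9),
    ("Treatment-2", 46.59, 6.42,  72.4),
    ("Treatment-3", 45.96, 7.26,  77.5),
    ("Control-1",   45.41, 7.34,  76.0),
    ("Control-2",   45.66, 7.36, 171.6),
    ("Control-3",   46.26, 6.69, 109.4),
]

total = sum(n for _, _, _, n in samples)

# Pooled mean: read-count-weighted average of per-sample means.
pooled_mean = sum(m * n for _, m, _, n in samples) / total

# Pooled variance: weighted per-sample variance plus the spread of the
# per-sample means around the pooled mean.
pooled_var = sum(n * (s ** 2 + (m - pooled_mean) ** 2)
                 for _, m, s, n in samples) / total
pooled_sd = math.sqrt(pooled_var)

print(round(pooled_mean, 2), round(pooled_sd, 2))  # ~46.03 and ~7.00
```

This lands within a few thousandths of the combined 46.027 / 6.997, which is reassuring that the combined numbers really do describe the full pooled read set.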
Now, it seems to me I should use the combined-sample values in CuffDiff, since they are derived directly from the reads CuffDiff will be chewing on. I was thinking I'd set "-m 46 -s 7" in the CuffDiff command.
Does that seem logical?
If not, just how does one come up with the fragment length and standard deviation for single end reads?
Thanks, Michael