SEQanswers > Bioinformatics

01-12-2018, 12:27 AM   #1
Welmu
Junior Member
Location: Helsinki
Join Date: Apr 2013
Posts: 1

DESeq2: rows did not converge in beta

I'm trying to run a rather complicated model in DESeq2 with 16S microbiome data. Specifically, the design is "~ confounder + diseaseStatus:subject + timepoint*diseaseStatus" (a two-timepoint case-control comparison with the same subjects at both timepoints; all variables are factors except the confounder, which is numeric). Probably because of how complicated the model is, I keep running into the "rows did not converge in beta, labelled in mcols(object)$betaConv. Use larger maxit argument with nbinomWaldTest" warning. This wouldn't be such a big problem otherwise, but many of the rows that don't converge represent microbial taxa that I am very much interested in and would like to have results for.
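For concreteness, this is roughly how I set the model up (a sketch; `counts` and `sampleData` stand in for my actual count matrix and sample table, and the column names are as described above):

```r
library(DESeq2)

# counts: taxa x samples integer matrix; sampleData: data.frame with
# confounder (numeric), diseaseStatus, subject, timepoint (factors)
dds <- DESeqDataSetFromMatrix(
  countData = counts,
  colData   = sampleData,
  design    = ~ confounder + diseaseStatus:subject + timepoint * diseaseStatus
)
dds <- DESeq(dds)  # this is where the convergence warning appears
```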

First of all, the output still includes a full set of results, complete with p-values, for the rows that were labeled as non-converging. Are these good for anything, or should I ignore them completely? Some old discussions on this topic suggest deleting all non-converging rows from the output. The DESeq2 documentation for nbinomWaldTest also mentions a useOptim parameter ("whether to use the native optim function on rows which do not converge within maxit"), which defaults to TRUE and I assume is relevant here, but I don't really understand what it means in practice.
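The filtering those older discussions describe would, if I understand correctly, look something like this (sketch using the `dds` object from the fitted model):

```r
# mcols(dds)$betaConv is a logical flag per row: TRUE if the GLM
# coefficients converged, FALSE (or NA) otherwise
res <- results(dds)
converged <- mcols(dds)$betaConv
table(converged)                    # count of converged vs. non-converged rows

# drop the non-converged rows, as suggested in older threads
resClean <- res[which(converged), ]
```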

Secondly, is there any way to get around this issue? I've looked at everything Google and earlier conversations suggest, which mostly comes down to trimming the data and increasing the maxit value for nbinomWaldTest. I've tried both, and neither trimming the data aggressively nor increasing maxit by several orders of magnitude (from the default 100 all the way to 1,000,000) lowers the number of non-converging rows. I'd be extremely thankful for any additional ideas.
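For reference, this is how I've been re-running the Wald test with a larger maxit (a sketch; the preceding estimation steps are only needed if DESeq() hasn't already been run on `dds`):

```r
# re-fit the Wald test alone with a larger iteration cap;
# useOptim = TRUE (the default) falls back to optim() for rows
# that still fail after maxit iterations
dds <- estimateSizeFactors(dds)
dds <- estimateDispersions(dds)
dds <- nbinomWaldTest(dds, maxit = 5000, useOptim = TRUE)

sum(!mcols(dds)$betaConv)  # number of rows still not converged
```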

(I guess it might come down to "run a simpler model". I'm just fond of running it as one comparison using all the samples at once, instead of subsetting it by timepoint or some such.)
Welmu is offline
