SEQanswers

Old 12-10-2012, 12:39 PM   #1
gwilymh
Member
 
Location: Milwaukee

Join Date: Dec 2011
Posts: 72
Importing and processing data in R line by line

I am analyzing large datasets in R. To analyze data, my current practice is to import the entire dataset into the R workspace using the read.table() function. Rather than importing the entire dataset, however, I was wondering if it is possible to import, analyze and export each line of data individually so that the analysis would take up less computer memory.

Can this be done? And if so, how?
Old 12-11-2012, 12:14 AM   #2
dariober
Senior Member
 
Location: Cambridge, UK

Join Date: May 2010
Posts: 311

Quote:
Originally Posted by gwilymh
I am analyzing large datasets in R. To analyze data, my current practice is to import the entire dataset into the R workspace using the read.table() function. Rather than importing the entire dataset, however, I was wondering if it is possible to import, analyze and export each line of data individually so that the analysis would take up less computer memory.

Can this be done? And if so, how?
Hi- R is designed to read the whole data file in one go. Reading line by line might be possible, but it is probably going to be horribly slow. However, instead of reading line by line you could read in chunks of several lines in a loop like this:

Code:
myinput <- "bigdata.txt"  ## Path to your big input file
totlines <- 10000000      ## Number of lines in your big input. Get it from wc -l
chunkLines <- 10000       ## No. of lines to read in one go. Set to 1 to really read one line at a time.
skip <- 0
while (skip < totlines){
    df <- read.table(myinput, skip = skip, nrows = chunkLines, stringsAsFactors = FALSE)
    ## ...do something with df...
    skip <- skip + chunkLines
}
Essentially, you use the skip and nrows arguments to read chunks of lines. To speed up read.table(), set stringsAsFactors to FALSE.

A better alternative might be to use packages designed for dealing with data larger than memory; ff (http://cran.r-project.org/web/packages/ff/index.html) is one of them.
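
For example, something along these lines should work (an untested sketch; the file name, separator and chunk sizes are just placeholders, and extra arguments are passed on to read.table):

Code:
library(ff)
## Parse the file in chunks and keep the resulting table on disk rather than in RAM.
bigdf <- read.table.ffdf(file = "bigdata.txt", header = TRUE, sep = "\t",
                         first.rows = 10000, next.rows = 50000)
dim(bigdf)  ## dimensions are available without loading the whole table into memory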

Hope this helps!

Dario
Old 12-11-2012, 07:05 AM   #3
gwilymh
Member
 
Location: Milwaukee

Join Date: Dec 2011
Posts: 72

Thanks Dario, much appreciated.
Old 12-11-2012, 07:22 AM   #4
nexgengirl
Member
 
Location: Maryland

Join Date: Apr 2010
Posts: 31

Check out the readLines function in R.
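
For example, a minimal sketch of processing a file in chunks with readLines on an open connection (the file name, chunk size and the per-line processing are placeholders):

Code:
con <- file("bigdata.txt", open = "r")                ## open the input file as a connection
while (length(lines <- readLines(con, n = 10000)) > 0) {
    fields <- strsplit(lines, "\t")                   ## split each line into fields
    ## ...do something with fields, e.g. compute per-line statistics...
}
close(con)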