SEQanswers

09-19-2013, 02:41 AM   #1
gvoisin
Junior Member
 
Location: Cork (Ireland)

Join Date: May 2011
Posts: 3
Bioconductor/R workshop for high-throughput data analysis (web version)

Hello,

Boasting a large collection of software for the analysis of biological data, Bioconductor and R are invaluable tools for data management, pipeline construction, and analysis.
To help biologists tackle the challenge of learning to use these tools, BSC is offering an online workshop on the analysis of high-throughput data, with an emphasis on microarray and next-generation sequencing technologies.

To learn more, visit us at

http://bioinformatics-sc.com/workshop.ws

The BSC team
10-12-2013, 10:13 AM   #2
danielecook
Junior Member
 
Location: Chicago

Join Date: Oct 2013
Posts: 8

Hi -

I've got a question about Bioconductor. I've been struggling with it for the past week. I was under the impression that Bioconductor could handle 'high-throughput' analysis of large files (500 MB+) on my laptop without any trouble.

Specifically, I am trying to do ChIP-seq analysis on a genome-wide scale. Unfortunately, I haven't even been able to get the files to load: R tries to load them in their entirety into RAM. I had assumed it could work with the files without necessarily loading them into memory.

What strategies do people use to work with large files in Bioconductor? Is there some major thing I am missing here?

Thanks!
Dan
10-12-2013, 03:17 PM   #3
dpryan
Devon Ryan
 
Location: Freiburg, Germany

Join Date: Jul 2011
Posts: 3,480

R does have the unfortunate habit of loading entire files into RAM. If you're trying to load the raw sequences, or even the full BAM file, you're going to have problems. Process all of the reads outside of R (and call peaks there, too), then load just the peaks into R; that file should be much more manageable for you.
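
For example, something along these lines (just a sketch: the file names are made up, and it assumes your peak caller wrote its peaks out as a BED file):

Code:
library(rtracklayer)

## The peak calls (a BED file of a few MB at most) load comfortably
## into memory as a GRanges object.
peaks <- import("macs2_peaks.bed", format = "BED")

## If you really must touch the BAM in R, stream it in chunks instead
## of reading it whole: yieldSize caps the records read per call.
library(GenomicAlignments)
bf <- BamFile("sample.bam", yieldSize = 1e6)
open(bf)
while (length(chunk <- readGAlignments(bf))) {
    ## ... process one million alignments at a time ...
}
close(bf)

Since peaks comes back as a GRanges, downstream Bioconductor tools can work with it directly.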

BTW, in the future you might get a little better feedback if you stated exactly what's not working well. What are you trying to load and how big is it? How much RAM do you have? You won't get far if you don't give people the information they need to help you.

Last edited by dpryan; 10-12-2013 at 03:21 PM.
10-13-2013, 09:11 AM   #4
gvoisin
Junior Member
 
Location: Cork (Ireland)

Join Date: May 2011
Posts: 3

Hi. Personally, with an Intel i7 and 8 GB of RAM, I'm able to analyze large microarray datasets.
But with high-throughput data like NGS data, I think you need a server with many nodes and plenty of RAM; it isn't practical to process it on a laptop. High-throughput data needs high-capacity tools.
Hence an Amazon server is one solution, or your institution may have a compute cluster composed of many nodes with lots of RAM.

My strategy: I run the greedy steps, like alignment, on a server, save the resulting R object, and load it on my personal computer to continue the analysis (see the sketch below). In principle, the whole analysis could be run on the server, but the job scheduler imposes some constraints.
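
A minimal sketch of that save-and-transfer step (the file names are made up):

Code:
## On the server: run the expensive step, then serialize the result.
library(GenomicAlignments)             # Bioconductor
aln <- readGAlignments("sample.bam")   # greedy step: needs lots of RAM
saveRDS(aln, "sample_alignments.rds")  # compact single-object file

## On the laptop: copy the .rds file over (e.g. with scp), then resume.
aln <- readRDS("sample_alignments.rds")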

When you have big data, think big computer. You waste time processing the data on your own machine, and machine time is not expensive compared with the cost of generating the high-throughput data in the first place.
10-13-2013, 09:51 AM   #5
danielecook
Junior Member
 
Location: Chicago

Join Date: Oct 2013
Posts: 8
Thanks for the tips

I appreciate the feedback. We do have a supercomputer here I can make use of, and I have some experience with Amazon's cloud.

Actually, the data I am working with is already aligned (it's BED files).

I think my plan of attack will be to call peaks outside of R, then split the BED file up by chromosome for the purpose of visualizing peaks (with Gviz) alongside other tracks of interest (sketched below). Is this workflow similar to anything anyone else has done? I really appreciate your responses.
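
Something like this is what I have in mind (only a sketch; the file name, chromosome, and region are made up):

Code:
library(rtracklayer)  # import() reads BED into a GRanges
library(Gviz)         # track-based genome visualization

## Load the called peaks and restrict to a single chromosome.
peaks <- import("called_peaks.bed", format = "BED")
peaks_chr1 <- peaks[seqnames(peaks) == "chr1"]

## Build the tracks and plot a region of interest.
axis   <- GenomeAxisTrack()
ptrack <- AnnotationTrack(peaks_chr1, name = "ChIP peaks")
plotTracks(list(axis, ptrack), chromosome = "chr1",
           from = 1e6, to = 2e6)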

Thanks so much!
Tags
bioconductor, training, high-throughput analysis, online, workshop
