  • Bioconductor/R workshop for high-throughput data analysis (web version)

    Hello,

    Boasting a large collection of software for the analysis of biological data, Bioconductor and R are invaluable tools for data management, pipeline production, and analysis.
    To help biologists tackle the challenge of understanding and learning how to use these tools, BSC is offering an online workshop on the analysis of high-throughput data, with an emphasis on microarray and next generation sequencing technologies.

    To learn more, visit us at



    The BSC team

  • #2
    Hi -

    I've got a question regarding using Bioconductor. I've been struggling with Bioconductor for the past week - I was under the impression that Bioconductor could be used for 'high-throughput' analysis on large files (500MB+) without any trouble on my laptop.

    Specifically, I am trying to do ChIP-seq analysis on a genome-wide scale. Unfortunately, I haven't even been able to get the files to load. R tries to load them in their entirety into RAM - I had assumed it could work with the files without necessarily loading them fully into memory.

    What strategies do people use to work with large files in Bioconductor? Is there some major thing I am missing here?

    Thanks!
    Dan

    Comment


    • #3
      R does have the unfortunate habit of trying to load large files into RAM. If you're trying to load the raw sequences or even the full BAM file then you're going to have problems. Process all of the reads outside of R (also, call peaks there). Just load the peaks into R as this file should be much more manageable for you.

      BTW, in the future you might get a little better feedback if you stated exactly what's not working well. What are you trying to load and how big is it? How much RAM do you have? You won't get far if you don't give people the information they need to help you.
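
      A minimal sketch of that approach, assuming peaks were already called outside R (the file name here is hypothetical):

      ```r
      ## Load only the called peaks, not the raw reads or the full BAM file.
      ## A peaks BED file is typically a few MB, so it fits in RAM easily.
      library(rtracklayer)                 # Bioconductor package

      peaks <- import("sample_peaks.bed")  # returns a GRanges object
      length(peaks)                        # number of peaks
      summary(width(peaks))                # distribution of peak widths
      ```

      If you do need to touch a large BAM file from R, Rsamtools lets you stream it in chunks (via the `yieldSize` argument to `BamFile`) rather than reading it all at once.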
      Last edited by dpryan; 10-12-2013, 03:21 PM.

      Comment


      • #4
        Hi. Personally, with an Intel i7 and 8 GB of RAM, I'm able to analyze large microarray datasets.
        But I think that with high-throughput data like NGS data, you need a server with many nodes and plenty of RAM. It's not possible to process them on your laptop; HT data needs heavyweight tools.
        Hence, an Amazon server is one solution, or your institution may have a compute cluster composed of many nodes with lots of RAM.

        My strategy: I run my code on a server for the greedy steps, like alignment, save the R object, then load it on my personal computer to continue the analysis. Normally, all the analysis could be done on the server, but the job scheduler (I'm not sure of the term in English) imposes some constraints.

        When you have big data, think big computer. You lose time processing your data on your own machine. Moreover, machine time is not expensive (compared to the cost of generating the HT data).
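
        The server-to-laptop handoff above can be sketched like this (object and file names are hypothetical):

        ```r
        ## On the server: run the memory-hungry step, then save the much
        ## smaller result object to disk.
        library(GenomicRanges)                       # Bioconductor
        counts <- matrix(rpois(1e4, 10), ncol = 4)   # stand-in for a real count matrix
        saveRDS(counts, "counts.rds")

        ## On the laptop: copy counts.rds over (scp/rsync), then resume
        ## the downstream analysis with the small object only.
        counts <- readRDS("counts.rds")
        dim(counts)
        ```

        `saveRDS()`/`readRDS()` serialize a single R object, which keeps the transfer small and avoids carrying the whole workspace between machines.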

        Comment


        • #5
          Thanks for the tips

          I appreciate the feedback. We do have a supercomputer here I can make use of, and I do have some experience with Amazon in the cloud.

          Actually - the data I am working with is already aligned (it's bed files).

          I think my plan of attack will be to call peaks outside of R and then split the BED file into individual chromosomes for the purpose of visualizing peaks (with Gviz) along with other tracks of interest. Is this workflow similar to anything anyone else has done? I really appreciate your responses.
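
          That plan could look roughly like this in R (the file name and chromosome are assumptions; rtracklayer and Gviz are the Bioconductor packages mentioned):

          ```r
          ## Import the called peaks, split them per chromosome, and plot one
          ## chromosome with Gviz alongside a genome axis track.
          library(rtracklayer)
          library(Gviz)

          peaks  <- import("sample_peaks.bed")       # hypothetical peak file
          by_chr <- split(peaks, seqnames(peaks))    # GRangesList, one element per chromosome

          peak_track <- AnnotationTrack(by_chr[["chr1"]], name = "Peaks")
          axis_track <- GenomeAxisTrack()
          plotTracks(list(axis_track, peak_track), chromosome = "chr1")
          ```

          Splitting per chromosome is optional for Gviz itself, since `plotTracks()` restricts drawing to the requested chromosome and region anyway, but it keeps the per-plot objects small.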

          Thanks so much!

          Comment
