SEQanswers


laura 05-09-2011 04:42 AM

New Resources for 1000 Genomes

General Info

As well as posting new announcements on the front page of http://www.1000genomes.org, we have both an RSS feed (http://www.1000genomes.org/announcements/rss.xml) and a Twitter account (http://twitter.com/1000genomes).

You can also subscribe to an announcements list we have set up: http://listserver.1000genomes.org/ma...o/1000announce ([email protected])

We have started an FAQ (http://www.1000genomes.org/faq) to help with finding particular data sets associated with the 1000 Genomes Project and to answer other common questions.

Data Search

You can now search both our website and our ftp site.

To search the main website you can use the search box which appears in the top right hand corner of each page on http://www.1000genomes.org.

Our ftp search (http://www.1000genomes.org/ftpsearch) is linked from the menu bar at the top of each page. It is based on an index file, ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/current.tree, which is updated every night to reflect the contents of the ftp site.

The search looks for strings in the names of files and directories on the ftp site, so it can be used to find, for example, all vcf files, or the files associated with a particular release date or a particular individual.

The search options allow you to include md5s in the output and to have the ftp paths point to either the NCBI or the EBI ftp site. Because of the volume of results that would otherwise be returned, the search excludes fastq and bam files by default, but you can choose to include them. Currently the search only returns the first 1000 results, due to the large number of files on the ftp site.
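
If you prefer the command line, the same nightly index can be searched directly; a rough sketch (NA12878 is just an example individual):

# fetch the nightly index of the ftp site
wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/current.tree
# list every vcf file whose path mentions that individual
grep "NA12878" current.tree | grep "\.vcf"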

Accessibility

Many of our releases contain very large files which can be challenging to download in their entirety. Both bam and vcf files have indexes which allow subsections to be downloaded, using samtools or tabix respectively; there are descriptions of how to do this in our FAQ. We also now have a web-based tool within our Ensembl browser which allows you to request a 10KB subsection of these files.
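
For example, with tabix and samtools installed, a region can be pulled from remote indexed files without downloading them whole; a minimal sketch (the URLs are placeholders and the region is arbitrary):

# VCF: fetch a region, plus the header, from a remote tabix-indexed file
tabix -h <url-of-tabix-indexed-vcf.gz> 17:1471000-1472000 > region.vcf
# BAM: fetch the same region from a remote indexed bam
samtools view -h <url-of-indexed-bam> 17:1471000-1472000 > region.sam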

The Data Slicer (http://browser.1000genomes.org/tools.html) needs the URL of an indexed bam or vcf file; it will then present a view of the file and a bam or vcf slice to download. The Data Slicer can be accessed from the tools link in the top right-hand corner of all browser pages. It should work for any remotely accessible tabix-indexed vcf file. It will work for any indexed bam over http, but may only work for ftp bams within the EBI.

You can also attach bam or vcf files from our ftp site as tracks. To do so, click on the "manage your data" link on the left-hand menu of a page; this is best done from the Location view. The section of the menu you need is labelled "attach remote file". Only bam files from the EBI ftp site will be visible, but any remotely accessible vcf accompanied by a tabix index will work. Once your file is loaded you should be able to see the snps or aligned reads displayed, and also share these links with others. This is described with screenshots in our Ensembl tutorial: http://www.1000genomes.org/sites/100...l_20110506.doc

The browser also has a Variant Effect Predictor tool (http://browser.1000genomes.org/tools.html) which will take up to 750 snps and indels in VCF format or an Ensembl-specific format. The tool provides functional consequences with respect to the current gene and regulatory annotation, including SIFT and PolyPhen predictions for any non-synonymous snps. You can also download
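
For illustration, VCF-format input for the tool only needs the standard leading columns; the line below is a made-up variant:

#CHROM  POS      ID  REF  ALT  QUAL  FILTER  INFO
17      1471500  .   A    G    .     .       .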

If you have any questions about these new features, or any other aspects of the project, please email [email protected]

laura 06-16-2011 05:38 AM

We have also now added a public MySQL instance for the Ensembl databases which back our browser.

You can find more details of this on http://www.1000genomes.org/public-en...mysql-instance
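
The exact host and port are listed on that page; generically, the instance can be queried with a standard mysql client, as with the main Ensembl public server (host and port below are placeholders; the user is typically anonymous on Ensembl public servers):

mysql --host <hostname> --port <port> --user anonymous -e "SHOW DATABASES"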

laura 09-01-2011 06:30 AM

Our browser has been updated to version 63 of the Ensembl code, and we have a new Variation Pattern Finder tool to go alongside it:

http://browser.1000genomes.org/Homo_...riationsMapVCF
http://www.1000genomes.org/variation-pattern-finder

The Data Slicer now also allows you to subset vcf files by sample and population:

http://browser.1000genomes.org/Homo_...tSlice?db=core

laura 03-01-2012 02:17 AM

We now have a tutorial about using 1000 Genomes data:

http://www.1000genomes.org/announcem...ial-2012-03-01

Joann 03-29-2012 07:12 AM

Amazon puts it in the cloud
 
s3.amazonaws.com/1000genomes

http://aws.amazon.com/1000genomes/
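
With a present-day AWS command line client (not something the original announcement assumes), the public bucket can be listed without credentials:

# anonymous listing of the top level of the public bucket
aws s3 ls s3://1000genomes/ --no-sign-request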

Richard Finney 03-29-2012 09:46 AM

Amazon 1000 genomes?

From the amazon blog : "Researchers pay only for the additional AWS resources they need for further processing or analysis of the data.".

I'm guessing that's the "gotcha": you can view chunks for free (which you can anyway ... from other sources) but you get to pay for analyzing it.

I am wary of this "we'll keep the data and you can pay us" concept of "the cloud".

I think a better model would be: here's a shell login to your own VM and you can write or use your own python/java/c/bash programs to quickly access the 200TB.

I wish TCGA would do something like this but the data is locked down pretty hard. Maybe we'll get some open access disease samples as more Asian countries provide less encumbered data.

laura 07-02-2012 01:51 AM

http://www.1000genomes.org/announcem...lls-2012-07-02

A relatively complete set of variant and other files associated with our Phase 1 analysis is now available on the ftp site.

gsgs 12-05-2012 06:46 AM

currently I estimate (wild guess) you have ~500 complete human genomes (1500 GB)
at ~10-fold coverage, but they are scattered across lots of different formats and
directories, and it would take me ~10 hours to figure out how to find the data,
decompress and convert it, and another ~5 hours just to download the compressed data

I'd like to see the estimates of others

----------new estimates-------
they have all 1092 genomes (people, "samples") sequenced at 2-6 fold coverage
(which I assume means that they have lots of small segments (~500 nucleotides
per segment?) from the genome; those may have many errors, but they cover
the genome ~2-6 fold at each position)
critical positions, those with expected mutations, are covered more often (50-100 fold)
so they have a total of ~2e13 overlapping nucleotides

the data is in "vcf" files with a complicated format, so I stay with my estimate
of ~10 hours' work to convert them into a workable format.

the data could be ~700 MB only; the y-chr came in 2 files of 29 MB compressed
-------------------------------------------------

laura 12-05-2012 06:50 AM

What would you like to do with the data? That will very much determine the best way to approach the data set.

1000 Genomes is a large data set with a variety of different data formats, but to answer a single question you rarely need more than one sort of file.

gsgs 12-05-2012 06:55 AM

I don't know yet.
Probably compare them: #mutations, distances;
calculate the consensus/ancestor, plot the distances,
make my cloud graphics (plot amino acid mutations over nucleotide mutations),
and mutation pictures (binary arrays, sequences over positions, a pixel
at (x,y) iff sequence x differs from the consensus at position y), etc.

maybe this also works for "STR"s as well as normal mutations (these are new to me)

calculate recombination frequency
estimate mutation rates and what changes them
statistics of codon usage
search for retroviruses

laura 12-05-2012 07:19 AM

I would strongly recommend starting with our recent paper and the analysis results associated with it

http://www.nature.com/nature/journal...ture11632.html

ftp://ftp.1000genomes.ebi.ac.uk/vol1...lysis_results/

That is a great starting point

gsgs 12-05-2012 07:37 AM

thanks.
10 pages for the paper (pdf) ... printing...
2 pages for the readme
that will keep me busy for a while ...
well, I'll probably only read and understand parts of it

I know there is also the "hapmap" project; I managed to get
one of their tables into the computer and analyze it

laura 12-05-2012 09:38 AM

do feel free to email [email protected] if you have any questions

We also have a recent set of slides which were presented in a tutorial at ASHG 2012:

http://www.1000genomes.org/announcem...012-2012-11-09

gsgs 12-05-2012 10:07 AM

no Y chromosome?

how would I pack the data?
I want the 1092 * 36.7M SNPs in 23 binary files, one per chromosome.
Bit i of chromosome j in file (sample) k should be set iff that SNP is present.
Then compressed with gzip.
23 files, ~50 MB per file, I estimate
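
(a quick check on that estimate: 1092 samples * 36.7e6 sites * 1 bit ≈ 5 GB uncompressed in total, i.e. ~220 MB per chromosome file on average, so ~50 MB per file assumes gzip shrinks these sparse bit vectors roughly 4-5x)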

gsgs 12-05-2012 10:38 AM

wait, I have a better idea.
You compute the genetic distance between every pair of samples: 1092^2 integers, 4 MB.
Just the number of set bits in the logical xor of the two 37M-bit vectors.
Then you sort the 1092 samples circularly so the sum of the distances between neighbors
is minimal (a travelling salesman problem, typically easy to solve for n = 1092).
Then you compute the logical xors of adjacent samples, which presumably have lots of zeros.
1092 binary vectors of length 37M again, but this time with much better compression
via gzip or such, because of the many zeros.
I can write you the programs for encoding and decoding, if you want.
Self-expanding executable, easy to use, all automatic.
The size of that file would be a measure of the genetic variability of your set of 1092 samples.

laura 12-05-2012 01:29 PM

As far as chrY goes:

http://ftp.1000genomes.ebi.ac.uk/vol...notypes.vcf.gz

http://ftp.1000genomes.ebi.ac.uk/vol...notypes.vcf.gz

We provide all our variation data in VCF format, which serves our needs quite well. If you have a better idea for your own needs, you should be able to get all the info you need from these files to do the conversion.

Look at http://www.1000genomes.org/faq/how-d...your-vcf-files for streaming if you want to avoid downloading the entire data set

rama 12-05-2012 02:18 PM

vcf file of a specific sample from 1000 Genomes data

Hi,

Can anyone help me with how to access the vcf file of a specific sample from the 1000 Genomes data? I found the consensus files at (ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/release) but couldn't find the individual samples.

I am trying to compare the variants found in our sequencing vs 1000 Genomes. If anyone has done a similar analysis, please let me know; I would like to discuss with you offline.

Thanks in advance
Rama

laura 12-05-2012 10:10 PM

You should be able to get this info from our vcf files using a combination of tabix and vcftools' vcf-subset, as described in our FAQ:

http://www.1000genomes.org/faq/how-d...your-vcf-files

rama 12-06-2012 09:35 AM

Laura,

Thanks much for your reply. I am guessing this is the example for getting the vcf of a sample:

tabix -h ftp://ftp-trace.ncbi.nih.gov/1000gen...804/ALL.2of4in... 17:1471000-1472000 | perl /nfs/1000g-work/G1K/work/bin/vcftools/perl/vcf-subset -c HG00098 | bgzip -c > /tmp/HG00098.20100804.genotypes.vcf.gz

laura 12-06-2012 10:17 AM

That is correct

