SEQanswers




Old 02-16-2012, 09:35 AM   #41
pcs_murali
Member
 
Location: Boston

Join Date: May 2010
Posts: 26
Default

Hi Rts,

Thank you very much for posting. Please let me know whether you can run it without any problems.

Chandra
Old 03-14-2012, 05:11 PM   #42
NKAkers
Member
 
Location: New York, NY

Join Date: Sep 2011
Posts: 26
Default

Hi Chandra,

I've had success with PathSeq so far, but I just hit a snag on one sample. I let it run for over 40 hours, but it never progressed beyond Job 2. I'm wondering whether my log file can provide any insight into what happened.

Thank you!
Code:
rmr: cannot remove config: No such file or directory.
rmr: cannot remove s3config: No such file or directory.
rmr: cannot remove load: No such file or directory.
Master data_loader
12/03/10 20:56:46 WARN streaming.StreamJob: -jobconf option is deprecated, please use -D instead.
packageJobJar: [/root/mapper_data_compsub.py, /mnt/hadoop/hadoop-unjar449601472672210086/] [] /tmp/streamjob6443148539614277804.jar tmpDir=null
12/03/10 20:56:47 INFO mapred.FileInputFormat: Total input paths to process : 20
12/03/10 20:56:47 INFO streaming.StreamJob: getLocalDirs(): [/mnt/hadoop/mapred/local]
12/03/10 20:56:47 INFO streaming.StreamJob: Running job: job_201203102044_0001
12/03/10 20:56:47 INFO streaming.StreamJob: To kill this job, run:
12/03/10 20:56:47 INFO streaming.StreamJob: /usr/local/hadoop-0.19.0/bin/../bin/hadoop job  -Dmapred.job.tracker=hdfs://ip-10-34-46-200.ec2.internal:50002 -kill job_201203102044_0001
12/03/10 20:56:48 INFO streaming.StreamJob: Tracking URL: http://ip-10-34-46-200.ec2.internal:...203102044_0001
12/03/10 20:56:49 INFO streaming.StreamJob:  map 0%  reduce 0%
12/03/10 20:57:02 INFO streaming.StreamJob:  map 10%  reduce 0%
12/03/10 20:57:03 INFO streaming.StreamJob:  map 30%  reduce 0%
12/03/10 20:57:04 INFO streaming.StreamJob:  map 45%  reduce 0%
12/03/10 20:57:06 INFO streaming.StreamJob:  map 55%  reduce 0%
12/03/10 20:57:07 INFO streaming.StreamJob:  map 100%  reduce 0%
12/03/10 22:05:24 INFO streaming.StreamJob: Job complete: job_201203102044_0001
12/03/10 22:05:24 INFO streaming.StreamJob: Output: load

real	68m38.462s
user	0m3.772s
sys	0m0.738s
Master loader completed
ERROR: Bucket 'ami-kippsample03job-stat' does not exist
Bucket 's3://ami-kippsample03job-stat/' removed
Bucket 's3://ami-kippsample03job-stat/' created
ERROR: Bucket 'ami-kippsample03job-output' does not exist
Bucket 's3://ami-kippsample03job-output/' removed
Bucket 's3://ami-kippsample03job-output/' created
File s3://ami-kippsample03reads/input1.local saved as '/usr/local/hadoop-0.19.0/input1.local' (97 bytes in 0.1 seconds, 1678.37 B/s)
File s3://ami-kippsample03reads/input10.local saved as '/usr/local/hadoop-0.19.0/input10.local' (98 bytes in 0.1 seconds, 1848.92 B/s)
File s3://ami-kippsample03reads/input11.local saved as '/usr/local/hadoop-0.19.0/input11.local' (98 bytes in 0.0 seconds, 2.34 kB/s)
File s3://ami-kippsample03reads/input12.local saved as '/usr/local/hadoop-0.19.0/input12.local' (98 bytes in 0.1 seconds, 1576.68 B/s)
File s3://ami-kippsample03reads/input13.local saved as '/usr/local/hadoop-0.19.0/input13.local' (98 bytes in 0.1 seconds, 741.02 B/s)
File s3://ami-kippsample03reads/input14.local saved as '/usr/local/hadoop-0.19.0/input14.local' (98 bytes in 0.1 seconds, 1012.16 B/s)
File s3://ami-kippsample03reads/input15.local saved as '/usr/local/hadoop-0.19.0/input15.local' (98 bytes in 0.1 seconds, 1903.09 B/s)
File s3://ami-kippsample03reads/input16.local saved as '/usr/local/hadoop-0.19.0/input16.local' (98 bytes in 0.0 seconds, 1989.20 B/s)
File s3://ami-kippsample03reads/input17.local saved as '/usr/local/hadoop-0.19.0/input17.local' (98 bytes in 0.0 seconds, 1997.18 B/s)
File s3://ami-kippsample03reads/input18.local saved as '/usr/local/hadoop-0.19.0/input18.local' (98 bytes in 0.0 seconds, 2.15 kB/s)
File s3://ami-kippsample03reads/input19.local saved as '/usr/local/hadoop-0.19.0/input19.local' (98 bytes in 0.0 seconds, 2.40 kB/s)
File s3://ami-kippsample03reads/input2.local saved as '/usr/local/hadoop-0.19.0/input2.local' (97 bytes in 0.1 seconds, 1263.51 B/s)
File s3://ami-kippsample03reads/input20.local saved as '/usr/local/hadoop-0.19.0/input20.local' (98 bytes in 0.1 seconds, 1340.78 B/s)
File s3://ami-kippsample03reads/input21.local saved as '/usr/local/hadoop-0.19.0/input21.local' (98 bytes in 0.1 seconds, 1857.47 B/s)
File s3://ami-kippsample03reads/input22.local saved as '/usr/local/hadoop-0.19.0/input22.local' (98 bytes in 0.1 seconds, 1100.16 B/s)
File s3://ami-kippsample03reads/input23.local saved as '/usr/local/hadoop-0.19.0/input23.local' (98 bytes in 0.1 seconds, 1780.13 B/s)
File s3://ami-kippsample03reads/input24.local saved as '/usr/local/hadoop-0.19.0/input24.local' (98 bytes in 0.1 seconds, 1927.05 B/s)
File s3://ami-kippsample03reads/input25.local saved as '/usr/local/hadoop-0.19.0/input25.local' (98 bytes in 0.1 seconds, 1430.26 B/s)
File s3://ami-kippsample03reads/input26.local saved as '/usr/local/hadoop-0.19.0/input26.local' (98 bytes in 0.1 seconds, 1714.27 B/s)
File s3://ami-kippsample03reads/input27.local saved as '/usr/local/hadoop-0.19.0/input27.local' (98 bytes in 0.0 seconds, 2.18 kB/s)
File s3://ami-kippsample03reads/input28.local saved as '/usr/local/hadoop-0.19.0/input28.local' (98 bytes in 0.0 seconds, 2.59 kB/s)
File s3://ami-kippsample03reads/input29.local saved as '/usr/local/hadoop-0.19.0/input29.local' (98 bytes in 0.0 seconds, 2.21 kB/s)
File s3://ami-kippsample03reads/input3.local saved as '/usr/local/hadoop-0.19.0/input3.local' (97 bytes in 0.0 seconds, 2.20 kB/s)
File s3://ami-kippsample03reads/input30.local saved as '/usr/local/hadoop-0.19.0/input30.local' (98 bytes in 0.1 seconds, 1836.96 B/s)
File s3://ami-kippsample03reads/input31.local saved as '/usr/local/hadoop-0.19.0/input31.local' (98 bytes in 0.1 seconds, 1788.36 B/s)
File s3://ami-kippsample03reads/input32.local saved as '/usr/local/hadoop-0.19.0/input32.local' (98 bytes in 0.0 seconds, 2.07 kB/s)
File s3://ami-kippsample03reads/input33.local saved as '/usr/local/hadoop-0.19.0/input33.local' (98 bytes in 0.1 seconds, 793.01 B/s)
File s3://ami-kippsample03reads/input34.local saved as '/usr/local/hadoop-0.19.0/input34.local' (98 bytes in 0.0 seconds, 2.05 kB/s)
File s3://ami-kippsample03reads/input35.local saved as '/usr/local/hadoop-0.19.0/input35.local' (98 bytes in 0.0 seconds, 2.04 kB/s)
File s3://ami-kippsample03reads/input36.local saved as '/usr/local/hadoop-0.19.0/input36.local' (98 bytes in 0.0 seconds, 2004.88 B/s)
File s3://ami-kippsample03reads/input37.local saved as '/usr/local/hadoop-0.19.0/input37.local' (98 bytes in 0.1 seconds, 1686.98 B/s)
File s3://ami-kippsample03reads/input38.local saved as '/usr/local/hadoop-0.19.0/input38.local' (98 bytes in 0.1 seconds, 1425.54 B/s)
File s3://ami-kippsample03reads/input39.local saved as '/usr/local/hadoop-0.19.0/input39.local' (98 bytes in 0.0 seconds, 3.35 kB/s)
File s3://ami-kippsample03reads/input4.local saved as '/usr/local/hadoop-0.19.0/input4.local' (97 bytes in 0.0 seconds, 2.06 kB/s)
File s3://ami-kippsample03reads/input40.local saved as '/usr/local/hadoop-0.19.0/input40.local' (98 bytes in 0.0 seconds, 2.07 kB/s)
File s3://ami-kippsample03reads/input41.local saved as '/usr/local/hadoop-0.19.0/input41.local' (98 bytes in 0.0 seconds, 1960.43 B/s)
File s3://ami-kippsample03reads/input42.local saved as '/usr/local/hadoop-0.19.0/input42.local' (98 bytes in 0.0 seconds, 2.09 kB/s)
File s3://ami-kippsample03reads/input43.local saved as '/usr/local/hadoop-0.19.0/input43.local' (98 bytes in 0.1 seconds, 1370.40 B/s)
File s3://ami-kippsample03reads/input44.local saved as '/usr/local/hadoop-0.19.0/input44.local' (98 bytes in 0.3 seconds, 358.54 B/s)
File s3://ami-kippsample03reads/input45.local saved as '/usr/local/hadoop-0.19.0/input45.local' (98 bytes in 0.0 seconds, 2.85 kB/s)
File s3://ami-kippsample03reads/input46.local saved as '/usr/local/hadoop-0.19.0/input46.local' (98 bytes in 0.0 seconds, 2013.97 B/s)
File s3://ami-kippsample03reads/input47.local saved as '/usr/local/hadoop-0.19.0/input47.local' (98 bytes in 0.0 seconds, 3.13 kB/s)
File s3://ami-kippsample03reads/input48.local saved as '/usr/local/hadoop-0.19.0/input48.local' (98 bytes in 0.1 seconds, 1585.68 B/s)
File s3://ami-kippsample03reads/input49.local saved as '/usr/local/hadoop-0.19.0/input49.local' (98 bytes in 0.1 seconds, 1626.69 B/s)
File s3://ami-kippsample03reads/input5.local saved as '/usr/local/hadoop-0.19.0/input5.local' (97 bytes in 0.0 seconds, 2.61 kB/s)
File s3://ami-kippsample03reads/input50.local saved as '/usr/local/hadoop-0.19.0/input50.local' (98 bytes in 0.0 seconds, 2.09 kB/s)
File s3://ami-kippsample03reads/input51.local saved as '/usr/local/hadoop-0.19.0/input51.local' (98 bytes in 0.1 seconds, 1158.50 B/s)
File s3://ami-kippsample03reads/input52.local saved as '/usr/local/hadoop-0.19.0/input52.local' (98 bytes in 0.0 seconds, 2.41 kB/s)
File s3://ami-kippsample03reads/input53.local saved as '/usr/local/hadoop-0.19.0/input53.local' (98 bytes in 0.0 seconds, 2018.21 B/s)
File s3://ami-kippsample03reads/input54.local saved as '/usr/local/hadoop-0.19.0/input54.local' (98 bytes in 0.1 seconds, 1402.40 B/s)
File s3://ami-kippsample03reads/input55.local saved as '/usr/local/hadoop-0.19.0/input55.local' (98 bytes in 0.0 seconds, 2.10 kB/s)
File s3://ami-kippsample03reads/input56.local saved as '/usr/local/hadoop-0.19.0/input56.local' (98 bytes in 0.0 seconds, 2.37 kB/s)
File s3://ami-kippsample03reads/input57.local saved as '/usr/local/hadoop-0.19.0/input57.local' (98 bytes in 0.0 seconds, 2.38 kB/s)
File s3://ami-kippsample03reads/input58.local saved as '/usr/local/hadoop-0.19.0/input58.local' (98 bytes in 0.1 seconds, 1810.96 B/s)
File s3://ami-kippsample03reads/input59.local saved as '/usr/local/hadoop-0.19.0/input59.local' (98 bytes in 0.0 seconds, 2.03 kB/s)
File s3://ami-kippsample03reads/input6.local saved as '/usr/local/hadoop-0.19.0/input6.local' (97 bytes in 0.0 seconds, 2.09 kB/s)
File s3://ami-kippsample03reads/input60.local saved as '/usr/local/hadoop-0.19.0/input60.local' (98 bytes in 0.0 seconds, 2.50 kB/s)
File s3://ami-kippsample03reads/input7.local saved as '/usr/local/hadoop-0.19.0/input7.local' (97 bytes in 0.0 seconds, 2.15 kB/s)
File s3://ami-kippsample03reads/input8.local saved as '/usr/local/hadoop-0.19.0/input8.local' (97 bytes in 0.1 seconds, 1397.15 B/s)
File s3://ami-kippsample03reads/input9.local saved as '/usr/local/hadoop-0.19.0/input9.local' (97 bytes in 0.0 seconds, 2.47 kB/s)
rmr: cannot remove test: No such file or directory.
rmr: cannot remove maq: No such file or directory.
Maq alignments + Duplicate remover
12/03/10 22:05:39 WARN streaming.StreamJob: -jobconf option is deprecated, please use -D instead.
packageJobJar: [/root/mapper_maqalignment.py, /root/Sam2Fastq.java, /root/FQone2Fastq.java, /root/Fastq2FQone.java, /root/removeduplicates_new.java, /root/MAQunmapped2FQone.java, /root/MAQunmapped2fastq.java, /mnt/hadoop/hadoop-unjar7647762000067869068/] [] /tmp/streamjob8505619080343769434.jar tmpDir=null
12/03/10 22:05:40 INFO mapred.FileInputFormat: Total input paths to process : 60
12/03/10 22:05:40 INFO streaming.StreamJob: getLocalDirs(): [/mnt/hadoop/mapred/local]
12/03/10 22:05:40 INFO streaming.StreamJob: Running job: job_201203102044_0002
12/03/10 22:05:40 INFO streaming.StreamJob: To kill this job, run:
12/03/10 22:05:40 INFO streaming.StreamJob: /usr/local/hadoop-0.19.0/bin/../bin/hadoop job  -Dmapred.job.tracker=hdfs://ip-10-34-46-200.ec2.internal:50002 -kill job_201203102044_0002
12/03/10 22:05:40 INFO streaming.StreamJob: Tracking URL: http://ip-10-34-46-200.ec2.internal:...203102044_0002
12/03/10 22:05:41 INFO streaming.StreamJob:  map 0%  reduce 0%
12/03/10 22:06:00 INFO streaming.StreamJob:  map 3%  reduce 0%
12/03/10 22:06:01 INFO streaming.StreamJob:  map 17%  reduce 0%
12/03/10 22:06:02 INFO streaming.StreamJob:  map 25%  reduce 0%
12/03/10 22:06:04 INFO streaming.StreamJob:  map 28%  reduce 0%
12/03/10 22:06:05 INFO streaming.StreamJob:  map 33%  reduce 0%
12/03/10 22:06:06 INFO streaming.StreamJob:  map 47%  reduce 0%
12/03/10 22:06:07 INFO streaming.StreamJob:  map 55%  reduce 0%
12/03/10 22:06:08 INFO streaming.StreamJob:  map 62%  reduce 0%
12/03/10 22:06:09 INFO streaming.StreamJob:  map 65%  reduce 0%
12/03/10 22:06:10 INFO streaming.StreamJob:  map 70%  reduce 0%
12/03/10 22:06:11 INFO streaming.StreamJob:  map 83%  reduce 0%
12/03/10 22:06:12 INFO streaming.StreamJob:  map 92%  reduce 0%
12/03/10 22:06:14 INFO streaming.StreamJob:  map 95%  reduce 0%
12/03/10 22:06:15 INFO streaming.StreamJob:  map 98%  reduce 0%
12/03/10 22:06:16 INFO streaming.StreamJob:  map 100%  reduce 0%
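As an aside, one way to spot where a streaming run like the one above stalled is to parse the timestamps on the progress lines and flag long gaps between them. A minimal sketch, assuming the Hadoop 0.19 streaming log format shown above (the function name and the 30-minute default threshold are arbitrary choices, not part of PathSeq):

```python
from datetime import datetime
import re

# Matches Hadoop streaming progress lines like:
# "12/03/10 22:06:16 INFO streaming.StreamJob:  map 100%  reduce 0%"
PROGRESS = re.compile(
    r"(\d{2}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}).*map (\d+)%\s+reduce (\d+)%"
)

def find_stalls(log_lines, gap_minutes=30):
    """Return (prev_line, next_line) pairs where consecutive progress
    reports are separated by more than gap_minutes."""
    events = []
    for line in log_lines:
        m = PROGRESS.search(line)
        if m:
            ts = datetime.strptime(m.group(1), "%y/%m/%d %H:%M:%S")
            events.append((ts, line.strip()))
    stalls = []
    for (t1, l1), (t2, l2) in zip(events, events[1:]):
        if (t2 - t1).total_seconds() > gap_minutes * 60:
            stalls.append((l1, l2))
    return stalls
```

Run over a pasted log, this only considers `map X%  reduce Y%` lines, so a job that hangs after its last progress report (as Job 2 appears to here) shows up as no further events at all rather than as a gap.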
Old 03-15-2012, 06:00 PM   #43
pcs_murali
Member
 
Location: Boston

Join Date: May 2010
Posts: 26
Default

Hi NKAkers,

Thanks for using PathSeq.

Most likely, this sample contains a lot of microbial sequences, which can lead to long runs.

Also, AWS has recently had some issues with its nodes.

Could you let me know the source of the sample?

Thanks
Chandra
Old 03-15-2012, 06:33 PM   #44
NKAkers
Member
 
Location: New York, NY

Join Date: Sep 2011
Posts: 26
Default

Hi Chandra,

Thanks for making PathSeq. The sample source is human RNA-seq data: 155 million reads originally, with 55 million passing quality filters. I was expecting the vast majority of reads to be human, and I've had success previously with similar datasets, always with <30 hr run times.

My plan is to try a different dataset in the next few days; if that works, I'll assume it was something in that particular dataset or a one-time glitch.

Thanks!
Old 03-16-2012, 07:48 AM   #45
pcs_murali
Member
 
Location: Boston

Join Date: May 2010
Posts: 26
Default

Hi NKAkers,

Please post the latest updates from your end.

Also, what was the source of the tissue you sequenced?

Thanks
Chandra
Old 06-21-2012, 10:08 AM   #46
pravee1216
Member
 
Location: India

Join Date: Aug 2010
Posts: 35
Default Installation on cluster

Hi Chandra,

One quick question: do you provide an installer to set up and run PathSeq on a local cluster/server? It would be great if we had one. BWA-based alignment against other genomic databases can be performed on a cluster within an hour. Do you have any plans to release such a version? It would be a great help to the research community.

Thanks, and I look forward to your comments.

Best

Praveen.
Old 06-21-2012, 06:03 PM   #47
pcs_murali
Member
 
Location: Boston

Join Date: May 2010
Posts: 26
Default

Hi Praveen,

I have just finished Pathseq_BWA and released it to beta testers.

This weekend, I will upload the latest version.

Thanks
Chandra



Quote:
Originally Posted by pravee1216 View Post
Do you provide an installer to setup and run PathSeq on a local cluster/server? [...]
Old 06-24-2012, 10:24 AM   #48
pravee1216
Member
 
Location: India

Join Date: Aug 2010
Posts: 35
Default

Sounds good. When will it be available to us? Is this version capable of running on a server system?

Thanks for taking the initiative to build this version.

Praveen.
Old 08-30-2012, 08:59 AM   #49
DineshCyanam
Compendia Bio
 
Location: Ann Arbor

Join Date: Oct 2010
Posts: 35
Default

Hi Chandra,
Any update on Pathseq_BWA?

-Dinesh
Old 09-11-2012, 10:30 AM   #50
pcs_murali
Member
 
Location: Boston

Join Date: May 2010
Posts: 26
Default

Hi Pathseq users,

We have released PathSeq version 1.2, which includes the following updates:

http://www.broadinstitute.org/softwa...Downloads.html

Updates:
1. The BWA aligner replaces the MAQ aligner.
2. s3cmd has been updated.
3. A new parameter, DATATYPE (WGS/RNASEQ), has been added; it helps the pipeline select the reference databases (genomes).
4. The Hadoop framework has been updated to 1.0.3.
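For illustration, a DATATYPE setting might look like the following in a job configuration file. The key name and syntax here are assumptions for the sake of the example, not PathSeq's documented format:

```
# Hypothetical job.config fragment -- key name and syntax are illustrative only
DATATYPE=RNASEQ    # or WGS; tells the pipeline which reference genomes to use
```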

Please send me your comments and suggestions.

Thanks
Chandra
Old 09-11-2012, 10:34 AM   #51
pravee1216
Member
 
Location: India

Join Date: Aug 2010
Posts: 35
Default

Hi Chandra,

Nice to see the update. Does this version run on a local cluster system? If not, do you have any plans to release one that does?

Thanks

Praveen.
Old 09-11-2012, 01:27 PM   #52
pcs_murali
Member
 
Location: Boston

Join Date: May 2010
Posts: 26
Default

Hi Praveen,

What kind of cluster system do you have?

I am working on several other options, which will be released soon.

Thanks
Chandra
Quote:
Originally Posted by pravee1216 View Post
Does this version support or run on a cluster system? [...]
Old 09-12-2012, 07:31 AM   #53
DineshCyanam
Compendia Bio
 
Location: Ann Arbor

Join Date: Oct 2010
Posts: 35
Default

Thanks for the update, Chandra. Will try it out and get back to you...

- Dinesh

Quote:
Originally Posted by pcs_murali View Post
We released new version Pathseq version 1.2 that has following updates: [...]
Old 09-12-2012, 12:52 PM   #54
DineshCyanam
Compendia Bio
 
Location: Ann Arbor

Join Date: Oct 2010
Posts: 35
Default

It looks like there is a bug in the Preprocessed_Reads.com file: line 18 is a stray exit, so the script quits early. After commenting out that line, the script runs fine.
Code:
@ n_para=$#

# Reading environmental variables for running job from cluster.config and job.config
set para = `awk -f readconfig.awk T0=cluster.config T01=job.config < .empty.lst`
set fq1 = $1

echo $para[15]

exit    # <-- the stray exit on line 18; comment this out and the script continues

echo $fq1 > .tmp
set namefile = `awk '{ns=split($1, x, "/"); print x[ns];}' .tmp`

Last edited by DineshCyanam; 09-12-2012 at 01:02 PM.
Old 09-13-2012, 06:17 AM   #55
DineshCyanam
Compendia Bio
 
Location: Ann Arbor

Join Date: Oct 2010
Posts: 35
Default

Alright, so I ran the new PathSeq_BWA version, and here are the results: I had ~65 million filtered reads, the run took ~8 hours to finish, and it produced 177,308 unmapped reads. This was run on 19 worker nodes (plus 1 master node) as large instances.

More when I'm done analyzing the results.

- Dinesh

Old 12-04-2012, 01:21 AM   #56
zaki
Member
 
Location: Malaysia

Join Date: Dec 2012
Posts: 15
Default

Hi people of the thread...

Does PathSeq still require the use of AWS for installation?

I just graduated from university and would like to explore PathSeq, as it sounds fun, but if it still requires AWS and a credit card then I might not be able to.

I do have access to a Unix cluster; however, if the installation instructions at http://www.broadinstitute.org/softwa...tallation.html still hold true, I don't think I have the resources to use AWS.

On a side note, how much would AWS charge for a PathSeq run?

Cheers
Old 03-04-2013, 11:28 AM   #57
pcs_murali
Member
 
Location: Boston

Join Date: May 2010
Posts: 26
Default

Hi Zaki,

The PathSeq package downloaded from that URL is configured for AWS. Technically, however, it can run on a local Hadoop cluster (which may require some changes to the scripts).

The AWS charge for PathSeq depends entirely on your data; we have no affiliation with AWS.

Could you let me know whether your Unix cluster is configured for Hadoop?

Thanks
Chandra
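For anyone attempting the local-cluster route: the AWS logs earlier in the thread show the general shape of the streaming invocation, so a local run would presumably look something like the sketch below. The jar path, input/output paths, and job name are assumptions based on a stock Hadoop 1.0.3 install, not PathSeq's actual scripts.

```shell
# Hypothetical sketch of one PathSeq streaming step on a local Hadoop 1.0.3
# cluster; paths and names are placeholders, not the official scripts.
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-1.0.3.jar \
    -D mapred.job.name=pathseq_data_loader \
    -D mapred.reduce.tasks=0 \
    -input  /user/$USER/pathseq/input \
    -output /user/$USER/pathseq/load \
    -mapper mapper_data_compsub.py \
    -file   mapper_data_compsub.py
```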