Hi all,
I'm trying to flow sort a fixed number of cells (or at least as close to a fixed number as I can), get the total RNA, then run a gene-specific RNA-seq protocol as a control for what we can expect from our clinical samples, which should have a roughly similar range of cell numbers.
***
Technical details: I am using the NucleoSpin RNA XS kit from Macherey-Nagel (MN). I sort cells into the kit's lysis buffer (it contains guanidinium thiocyanate + TCEP). I calibrate the sort so that cells are deposited as directly into the lysis buffer as possible, and I use a larger volume of lysis buffer to offset dilution from the sort droplets.
After sorting, I flash freeze the samples on dry ice, then store them at -80 °C until I have time to process them, usually within 3-4 days.
end of technical details
***
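For anyone wanting to sanity-check the dilution point above, here's a rough back-of-the-envelope sketch. The droplet volume is an assumed number (actual volume depends on your sorter and nozzle size), so treat this as illustrative only:

```python
# Rough sketch (hypothetical numbers): estimate how much sorter droplet
# volume dilutes the lysis buffer, to decide how much extra buffer to use.
# ~3 nL/droplet is an assumption, not a measured value; it varies with
# nozzle size and instrument.

def lysis_buffer_fraction(n_cells, buffer_ul, droplet_nl=3.0):
    """Fraction of the final volume that is still lysis buffer."""
    sheath_ul = n_cells * droplet_nl / 1000.0  # nL of sheath fluid -> uL
    return buffer_ul / (buffer_ul + sheath_ul)

# Sorting 5,000 cells adds ~15 uL of sheath fluid under these assumptions:
# with 100 uL of buffer, the buffer is still ~87% of the final volume;
# with only 30 uL, it drops to ~67%, which is why I start with extra buffer.
```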
After isolation, I run some of the samples on Advanced Analytical (AA) chips for picogram-level RNA quantification (we submit to a core facility for the RNA chips), and here's where I'm running into trouble.
I've attached two of my traces below (1 and 2). They are duplicates (RNA from 5,000 cells eluted in 10 uL of water) run on two separate AA runs. As you can see, they look very similar (including the RFU intensity of the lower marker (LM)), but the concentration estimates are very different. I talked to the core about this, and apparently the difference comes down to how the ladder ran, which in turn changes the estimated concentration of the marker.
I then compared these runs to another pico RNA run I had from over a year ago (trace 3, attached here), and again the estimate is very different. I followed up with the core about it. Apparently, the marker for the machine has changed over the year (from 20 bp to 15 bp, and to a higher concentration), but the ladder, which is used to determine the overall concentration, hasn't changed.
That just doesn't make sense to me. The new 15 bp marker is clearly more concentrated (a much more intense signal/RFU), but for some reason that isn't reflected in the marker's estimated concentration (at least for one of the traces). I suspect someone made a mistake in analyzing the data. Am I right? Or is this just normal for this sort of fragment analysis?
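For context on why I think the marker change matters: as I understand it (this is my assumption about the logic, not the vendor's documented algorithm), marker-referenced quantification scales the sample peak area by the marker's nominal concentration per unit of peak area. A minimal sketch:

```python
# Simplified sketch of marker-referenced quantification (assumed logic,
# not the actual AA software): the sample concentration is the sample
# peak area times the marker's known concentration per unit of area.

def estimate_conc(sample_area, marker_area, marker_conc_ng_ul):
    """ng/uL estimate: sample area scaled by the marker's response factor."""
    return sample_area * (marker_conc_ng_ul / marker_area)

# If the analysis method still assumes the old marker concentration while
# the new marker actually runs much brighter, the same sample peak gets
# scaled by too small a response factor and the sample is under-called.
old_marker = estimate_conc(sample_area=1000, marker_area=500,  marker_conc_ng_ul=0.5)  # -> 1.0
new_marker = estimate_conc(sample_area=1000, marker_area=1500, marker_conc_ng_ul=0.5)  # -> ~0.33
```

If that's roughly how it works, a marker lot change that isn't updated in the method would shift every concentration estimate on the chip, which is what I seem to be seeing.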
My follow-up question is: how much should I care about the concentrations estimated from these analyses? My 28S/18S ratio looks good and the baseline is clean. I initially wanted a quantitative way to confirm I'm getting good RNA extraction, but now I'm not sure that's possible. Looking through more traces, the peak intensity, even of just the marker, seems to jump around quite a bit. I don't have much experience quantifying low RNA amounts, so I'm not sure what to expect.