Monthly Archives: November 2012
Here is an interesting concept for today's world. In TraceFinder we want to focus on quantitation and related functions, and to do them well. All too often in our world of science we want to be all things to all data, but sometimes that doesn't equal doing one thing well. Just an interesting bit I thought I'd pass along.
Need to Share Data – Here’s a Freebie from Microsoft
So we've had a couple of people who wanted to send data across to help them look at methods and such, but they were limited by the amount of storage on Dropbox or by security concerns about an outside source.
Here is a link to sign up for Microsoft's SkyDrive.
This gets you 7 GB of storage right off the bat, no money required. The beauty of this is that it syncs with all your MS Office applications, and you can share files or keep them to yourself.
It also works as a location on your desktop, so you don't have to FTP large files; simply drag and drop.
The really cool part is that if I have an iPad or another mobile device with the SkyDrive app, I can edit my stored documents in real time in the cloud. I have access to them from anywhere at any time, and anyone I have shared a document with can collaborate in real time.
For those of you with large data sets, this is a great spot to share files, and it works with your integrated Windows security just as if it were a mapped drive.
The Master Method – A trinity of information
We will start a short series on the Master Method. So here we go.
Think of the Master Method as the supply truck that arrives every time you want to run an assay. It contains what you need to get things done. It's a little bulky and can be somewhat intimidating the first time you see it. Since it controls a lot of the power TF gives you, it has plenty that can be, but doesn't have to be, configured and used.
For example, if a value is set to zero in the QAQC section, it isn't turned on for use by the flagging system.
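To make that zero-means-off rule concrete, here is a minimal sketch in Python. TraceFinder's internal flagging logic is not public, so the function and field names below are purely hypothetical; the sketch only models the behavior described above, where a threshold of zero disables that check entirely.

```python
# Hypothetical sketch only -- not TraceFinder's actual API.
# Rule being modeled: a QAQC threshold left at zero means that
# check is not configured, so it never raises a flag.

def qaqc_flags(measured: dict, thresholds: dict) -> list:
    """Return the names of QAQC checks that exceed a non-zero threshold."""
    flags = []
    for check, limit in thresholds.items():
        if limit == 0:
            continue  # zero = check turned off, skip it
        if measured.get(check, 0) > limit:
            flags.append(check)
    return flags

# Only "carryover" has a non-zero limit, so "rt_shift" can never flag:
print(qaqc_flags({"carryover": 2.5, "rt_shift": 0.4},
                 {"carryover": 1.0, "rt_shift": 0}))  # -> ['carryover']
```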
But let's start off with the basics.
The Master Method is a repository for the three things needed to gather and process a data file.
- The instrument method
- The peak processing parameters
- The selected reports to be produced.
With these three things we can submit a batch of samples and acquire, process, and produce reports on the fly, after every sample. (But you need a sample list too.)
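The trinity above can be pictured as a simple data structure. This is just an illustrative sketch, assuming hypothetical class and field names; TraceFinder does not expose the Master Method as a public API, and the example path and compound values here are made up.

```python
# Illustrative sketch of the Master Method's three parts.
# All names and values are hypothetical, not a real TraceFinder API.

from dataclasses import dataclass, field

@dataclass
class MasterMethod:
    instrument_method: str        # a copy of a stored instrument method
    processing_params: dict       # peak processing settings per compound
    reports: list = field(default_factory=list)  # reports + formats to produce

method = MasterMethod(
    instrument_method="C:/Methods/example_assay.meth",  # hypothetical path
    processing_params={"CompoundA": {"rt": 4.2, "window": 0.3}},
    reports=[("Quantitation Report", "PDF")],
)

# Picking one Master Method for a batch means all three parts travel together,
# which is what keeps batches consistent run after run.
print(len(method.reports))  # -> 1
```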
The instrument method is a copy of an instrument method stored somewhere on your hard disk. It is associated with the processing method and can be changed according to your needs, but you always have the ability to go back to your original instrument method and copy over any changes. It's a safety net.
It also ensures that you only have to pick the Master Method when making a batch of samples. This keeps a batch consistent, run after run and technician to technician.
The processing parameters are set up for each peak and compound grouping. Once set up, this again keeps things consistent.
Third is the report section, which seems to be a point of confusion at times. By selecting ahead of time which reports you want and the format they should be produced in, the data is always produced the same way without missing a page. It's also necessary for the report-on-the-fly feature.
If you want to create reports after each injection, simply check the box on batch submission that says "Create Reports". Then whichever reports you selected, in one or multiple formats, are created once each sample has been acquired and automatically processed. The exception is batch-level reports, which are produced at the end of the batch sequence.
Follow the video below to see the process.
The video is from the soon-to-be-released TF 3.0, but the workflow is the same.
If the video is blurry please click the cog wheel at the bottom of the panel and increase the video display resolution.
Extending a Calibration Curve, "Creating a historical cal file" – (Answer to an "Ask a Guru" question)
One aspect of the applied-market labs is that a calibration curve does not have to be run with each batch of samples.
Previously we showed you how to associate the calibration file from one batch to the one you are currently working with.
In this video, we show you how to create a historical curve by extending a calibration curve from another batch.
By selecting the curve and the extend-calibration function, the points found in the current batch of samples are added to the points from the previously acquired batch of samples. This, in conjunction with turning groups of compounds on and off, allows the user to build calibration curves for large sets of samples and use a historical running average of calibrators.
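Conceptually, extending a calibration is just pooling the calibrator points from both batches and fitting one curve to the combined set. The sketch below illustrates that idea with a plain least-squares line fit in Python; the numbers are made-up (concentration, response) pairs, and TraceFinder's actual regression options and weighting are not modeled here.

```python
# Conceptual sketch of "extend calibration": pool calibrator points from a
# previous batch with the current batch and fit one curve to all of them.
# Data values are invented for illustration; this is not TraceFinder code.

def fit_line(points):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    slope = sxy / sxx
    return slope, my - slope * mx

previous_batch = [(1.0, 102.0), (5.0, 498.0), (10.0, 1010.0)]  # (conc, response)
current_batch = [(2.0, 205.0), (8.0, 795.0)]

# The "historical" curve uses every calibrator from both batches:
slope, intercept = fit_line(previous_batch + current_batch)
print(round(slope, 1))  # -> 100.2
```

As more batches are run, older calibrator points keep contributing to the fit, which is what gives the running-average behavior described above.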
The video is from the soon-to-be-released TF 3.0, but the workflow is the same.
If the video is blurry please click the cog wheel at the bottom of the panel and increase the video display resolution.
Associating another batch's Calibration Curve to the current batch – (Answer to an "Ask A Guru" Question)
I think the questions below pretty well explain the feature shown in the video.
“What would you like to know?: Hi, Jamie,
Example: A Cal curve batch is acquiring.
Case 1: I submit a new batch of just patient samples and forget to extend the cal file to those samples. How do I extend the cal file to those samples w/o pulling them into the Cal Curve batch?
Case 2: I submit a new batch of just patient samples and extend the cal (with the wrong file) to those samples. How do I reprocess with the correct cal file?
Company: TFS
Area Of work: Clinical”
So if you have a batch in flight, you should pause the acquisition while you associate a calibration file. It just makes things cleaner if you have an older system with less RAM.
The video is from the soon-to-be-released TF 3.0, but if you are using an older version, the functionality and menu selections are still the same. Depending on the resolution setting, the last frame may be overlaid with another.
If the video is blurry please click the cog wheel at the bottom of the panel and increase the video display resolution.