Post List

Computer Science / Engineering posts


  • April 10, 2012
  • 11:02 AM
  • 785 views

Google Searches Give Away a Country's GDP

by Elizabeth Preston in Inkfish





Anytime we travel through the Internet we leave piles of data behind us, like Pigpen shedding his cloud of filth. It's too bad if you're concerned about privacy. But if you're a mathematician, that heap of dirt is more like a goldmine, and digging into it can turn up unexpected nuggets. A study of worldwide Google searches, for one thing, reveals that people in wealthier nations think less about the past.

Google collects data on what search terms people around the world are using. Researchers who want to use this data to compare search terms across different countries are usually restricted to places that share a language. But the authors of a new paper in Scientific Reports got around that problem by looking only at numerical search terms.

"We realized...that years represented in Arabic numerals are an almost universal written representation," author Helen Susannah Moat wrote in an email. By looking only at search terms such as 2011 or 2010, she and her coauthors could compare search data from nearly the whole globe.



"It seemed a logical first step to consider to what extent Internet users were searching for dates in the future compared to dates in the past," Moat says. For example, looking at data from 2010, the researchers compared searches including 2011 to those including 2009. The ratio of forward-looking to backward-looking searches in each country became its "future orientation" score.


The authors culled data from 45 countries with substantial Internet-using populations. Then they sorted those 45 countries by GDP ("also the most obvious variable," Moat says). A clear pattern popped out of the numbers: Countries with lower GDPs had lower future orientation scores, and vice versa. People in poorer countries did more searches concerning the previous year; those in wealthier nations searched more for the next year. The trend was strong, and it held up in data from 2009 and 2008 as well.
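To make the scoring concrete, here is a minimal Python sketch (mine, not the authors' code): the per-country search volumes and GDP figures are invented placeholders, and a simple rank correlation stands in for the paper's actual analysis.

```python
# Hypothetical sketch of the "future orientation" score: for searches made in
# 2010, divide the volume of "2011" searches by the volume of "2009" searches,
# then check how the score tracks GDP. All numbers below are invented.
from scipy.stats import spearmanr

searches_2010 = {            # country: (searches for "2011", searches for "2009")
    "Country A": (120_000, 390_000),
    "Country B": (300_000, 310_000),
    "Country C": (450_000, 280_000),
}
gdp_per_capita = {"Country A": 2_500, "Country B": 28_000, "Country C": 46_000}

future_orientation = {c: nxt / prev for c, (nxt, prev) in searches_2010.items()}

countries = sorted(searches_2010)
rho, _ = spearmanr([future_orientation[c] for c in countries],
                   [gdp_per_capita[c] for c in countries])
print(future_orientation)
print(f"rank correlation with GDP: {rho:.2f}")
```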



Countries with the lowest future orientation scores included Pakistan and Vietnam, where previous-year searches outnumbered next-year searches by a factor of three or four to one. In the United States and Canada, countries toward the higher end in future orientation, searches for the last year and the next year were roughly equal. Switzerland, Australia, and the United Kingdom were among the most forward-looking countries of all.


"One of the possible interpretations of our results," Moat writes, "is that a focus on the future supports economic success." In other words, populations that are more forward-thinking become wealthier. This up-by-the-bootstraps explanation doesn't seem like the simplest one, though.

Another possibility is that populations with more money and leisure time can afford to spend it thinking about the future. A person in a wealthier nation might search online for next year's concert tickets, dates of work holidays, or when the new iPad is coming out. Someone without disposable income, though, might not have many such events to look forward to.

Here's some good news for people in all nations: Google Trends is available online for aspiring data analysts to play with. Panning for gold in its graphs won't cost anything except your free time.
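For instance, here is a rough sketch of pulling year-number search interest with the unofficial pytrends package (a third-party wrapper around Google Trends, not an official Google API); the keyword list and timeframe are just an example, and the interface may change.

```python
# Sketch: compare last-year vs. next-year search interest during 2012 using
# the unofficial pytrends wrapper for Google Trends (pip install pytrends).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")
pytrends.build_payload(["2011", "2013"], timeframe="2012-01-01 2012-12-31")
interest = pytrends.interest_over_time()   # pandas DataFrame of weekly interest

backward = interest["2011"].mean()
forward = interest["2013"].mean()
print(f"future orientation ~ {forward / backward:.2f}")
```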


Preis, T., Moat, H., Stanley, H., & Bishop, S. (2012). Quantifying the Advantage of Looking Forward. Scientific Reports, 2. DOI: 10.1038/srep00350

... Read more »

Preis, T., Moat, H., Stanley, H., & Bishop, S. (2012) Quantifying the Advantage of Looking Forward. Scientific Reports. DOI: 10.1038/srep00350  

  • April 10, 2012
  • 07:40 AM
  • 730 views

Recent Advances in Genetic Research

by Jason Carr in Wired Cosmos

Those of you that read my posts regularly know that I believe the next (substantial) evolution of our species will involve either genetic modification, robotics, or a combination of both. So whenever I come across an interesting development happening in genetics research, AI, etc., I like to pass it along on here – even if [...]... Read more »

Zhiyu Peng, Yanbing Cheng, Bertrand Chin-Ming Tan, Lin Kang, Zhijian Tian, Yuankun Zhu, Wenwei Zhang, Yu Liang, Xueda Hu, Xuemei Tan.... (2012) Comprehensive analysis of RNA-Seq data reveals extensive RNA editing in a human transcriptome. Nature Biotechnology. DOI: 10.1038/nbt.2122  

  • April 8, 2012
  • 08:50 PM
  • 1,045 views

Simulations of Sand-Solid Impacts

by DJ Busby in Astronasty

Math and computer simulation whizzes at the University of North Carolina at Chapel Hill have just taken a nice step forward in granular simulation by approaching the problem with math heretofore not applied to dynamic granular modeling. It's really quite impressive.
Narain and Golas's work may be preferable to previous methods for two reasons: 1) its accuracy, and 2) its ability to trade precision for quality (or quality for precision) as an easily implemented option. Whenever computation is involved, that flexibility is very valuable.... Read more »

Narain, R., Golas, A., & Lin, M. (2010) Free-flowing granular materials with two-way solid coupling. ACM Transactions on Graphics, 29(6), 1. DOI: 10.1145/1882261.1866195  

  • April 6, 2012
  • 08:00 AM
  • 481 views

Brainbrawl round-up

by Zen Faulkes in NeuroDojo

Columbia University hosted a debate between Tony Movshon and Sebastian Seung last Monday, “Does the brain’s wiring make us who we are?” This became known informally as “brainbrawl.” I watched it livestreamed through the Radiolab site, and someone had the wherewithal to grab the video below (which Radiolab said they weren’t planning on archiving). Radiolab did archive the live chat here.



Where’s the fight?

As I predicted, it was a much more sedate affair than the “brainbrawl” moniker suggested. Seung set the tone in his first comments by pulling back from the big claims that he has made previously. Instead of discussing the nature of human identity (his TED talk) or immortality (his book), he was much more circumspect in outlining what a connectome could do for us. Near the end, he said, “All I want to do is map some connections!”

I liked this, actually. It is a far more sensible view of the promise of connectomes than we’ve sometimes seen. But yes, it would have been more fun if Seung had swung for the fences and talked about uploading consciousness. At the end, co-moderator Robert Krulwich was apologizing for the lack of blood on the floor and the modesty of the speakers.

The fight (to the extent that there is a fight), then, is not about whether connectome research is feasible or useful, but about grand challenges and resources. Movshon nailed it when he said people are looking for “gigascience” projects, and that neuroscience has been a “cottage industry.” While nobody said it directly, “gigascience” is often about making a sales pitch. People want to be at the forefront of establishing big projects, because the prospect of money is there. Someone made a comment about keeping score, and Movshon said, “The NIH is.” Krulwich asked Movshon, “What’s your recruiting pitch? What do you tell the young mavericks to bring them into the field?” (Krulwich occasionally seems to confuse science with the Wild West; also here.)

It seems to me like Seung talks about connectomes as a way to pitch his research, which involves developing techniques to do high-throughput neuroanatomy (e.g., Jain et al. 2010). Faster, more automated electron microscopy would be a godsend. But I doubt Columbia University would have hosted a debate on, “Should we develop better EM?” One commenter on Twitter said:

I haven't heard alternatives to connectome that generate comparable ideas/excitement
I’m not sure we need it. It’s not as though neuroscience is suffering for people; remember, we hold the biggest scientific meeting in the world. I personally am unconvinced that we need grand challenges in science. The history of science shows that the way you answer the big questions is by answering the small questions.

The invertebrate in the vertebrate brain

Invertebrates came up in the discussion a couple of times. Good circuit descriptions of the mammalian retina are actually fairly far along, and may be the first part of the brain for which we have a connectome. Seung commented, however, that the retina was like an invertebrate brain contained within a vertebrate brain. I think he was referring to several of the retinal cells being non-spiking, which is also true of the worm Caenorhabditis elegans.

On Twitter, Noah Gray said he didn’t think C. elegans informed the connectome debate at all, because it has non-spiking neurons. I disagree, but at the very least, it does point out the importance of the intrinsic properties of neurons. Movshon suggested that the reason that C. elegans had non-spiking neurons was that it was small. This is too simple an explanation. In crustaceans, you can find both spiking and non-spiking proprioceptive sensory neurons (e.g., Paul and Wilson, 1994), and there seems to be no readily apparent functional reason to favour one or the other.

As I’ve mentioned before, we have a connectome of C. elegans, and there was discussion about how useful it actually is. In his book, Seung admitted that the connectome hadn’t solved all the neurobiological research problems for that animal, but argued that it might be a special case. Seung tried to argue that C. elegans posed technical problems in recording from the neurons, but Carl Zimmer pointed out that if his hypothesis was true, that wouldn’t matter. I do think the moderators, and possibly Movshon, were too dismissive of what we have learned from the connectome of C. elegans. Seung is correct that the connectome is very important in guiding research on the nervous system of the animal.

What occurred to me, though, was that there might be another invertebrate example that shows the usefulness of determining neuronal circuits: the eye of the horseshoe crab.

Haldan K. Hartline won the Nobel prize for his work on horseshoe crab vision. Hartline was able to map connections between the photoreceptors in the crab’s eye. He had the advantages that the photoreceptors were spiking neurons (unlike in mammals), and that there was only one type of photoreceptor. That is, each was an interchangeable widget, and the properties of the neuron were largely determined by connections with other neurons. By figuring out the simple circuit in the eye, he showed how lateral inhibition was able to enhance contrast of edges. This is important because the horseshoe crab eye has very low resolution.
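To see why the circuit matters, here is a toy lateral-inhibition sketch (my own illustration, not Hartline's model): each unit's response is its input minus a fraction of its neighbours' inputs, which exaggerates the step between a dim and a bright region.

```python
# Toy lateral inhibition on a 1-D "retina": a step in light intensity comes out
# with an overshoot on the bright side and an undershoot on the dim side,
# i.e. the edge is enhanced. The numbers are arbitrary.
import numpy as np

intensity = np.array([1.0] * 8 + [3.0] * 8)   # dim region, then bright region
k = 0.2                                        # inhibition from each neighbour

response = np.empty_like(intensity)
for i in range(len(intensity)):
    left = intensity[i - 1] if i > 0 else 0.0
    right = intensity[i + 1] if i < len(intensity) - 1 else 0.0
    response[i] = intensity[i] - k * (left + right)

print(np.round(response, 2))
# Interior dim units respond at 0.6; the last dim unit next to the edge dips to 0.2.
# Interior bright units respond at 1.8; the first bright unit next to the edge peaks at 2.2.
```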

Many years later, Robert Barlow and colleagues used a connectome-like model to build a biologically realistic model of the horseshoe crab retina and the signal it sends to the brain (Passaglia et al. 1997; Barlow et al 2001). When it was published, it was the largest biologically realistic model that had been built. They showed that some previously puzzling features of the synapses, like neurons inhibiting themselves, did things like filter out flicker in the environment.

If the mammalian retina is an invertebrate nervous system trapped in a vertebrate brain, the horseshoe crab retina may be a vertebrate nervous system in an invertebrate brain.

Other tidbits

I liked Movshon’s comment that the brain is not a multi-purpose computer, but a specific-purpose computer. That is, brains are the products of natural selection and need to do specific things very well. This contrasted with his earlier argument against studying connectomes, which used a well-worn software analogy favoured by many cognitive psychologists: “Studying the hardware doesn’t tell you anything about the software.” True for your desktop computer, but the brain is not an electronic computer, as Seung noted.

The audience asked some very smart questions. I wished they’d had a chance to ask more.

References

Barlow R, Hitt J, Dodge F. 2001. Limulus vision in the marine environment. The Biological Bulletin 200(2): 169-176. DOI: 10.2307/1543311

Jain V, Seung HS, Turaga S. 2010. Machines that learn to segment images: a crucial technology for connectomics. Current Opinion in Neurobiology 20(5): 653-666. DOI: 10.1016/j.conb.2010.07.004

... Read more »

Barlow R, Hitt J, & Dodge F. (2001) Limulus vision in the marine environment. Biological Bulletin, 200(2), 169. DOI: 10.2307/1543311  

Passaglia C, Dodge F, Herzog E, Jackson S, & Barlow R. (1997) Deciphering a neural code for vision. Proceedings of the National Academy of Sciences of the United States of America, 94(23), 12649-54. PMID: 9356504  

  • April 4, 2012
  • 05:05 PM
  • 1,006 views

how to make random machines do your bidding

by Greg Fish in weird things

Long time readers probably noticed that the last month was a little off. Posts weren't coming as per the blog's natural rhythm and the annual April Fools gag was also absent. But there was a good reason for this, one I'd be happy to share with you if it wasn't for the fact that you [...]... Read more »

  • April 4, 2012
  • 02:32 PM
  • 808 views

Lubricating with silver nanoparticles for a better performance

by Cath in Basal Science (BS) Clarified

Oil changes are part of the routine maintenance for most drivers to keep their cars in good running condition. Engine oil reduces the friction between moving parts of the engine and minimizes its wear and tear. Like cars, a variety of machinery—whether a printing press or an excavator—also relies on lubricants to prevent breakdowns and [...]... Read more »

  • April 4, 2012
  • 01:15 AM
  • 1,005 views

Finding Nanoscale Defects in Memory Devices

by Jason Carr in Wired Cosmos

The future of space travel and artificial intelligence is dependent upon our ability to store massive amounts of data in really small areas. It’s a complex undertaking to say the least. Fortunately, new research indicates that we may get there a bit faster by enabling engineers to discover the defects that lead to memory failures [...]... Read more »

Lee, I., Obukhov, Y., Xiang, G., Hauser, A., Yang, F., Banerjee, P., Pelekhov, D., & Hammel, P. (2010) Nanoscale scanning probe ferromagnetic resonance imaging using localized modes. Nature, 466(7308), 845-848. DOI: 10.1038/nature09279  

  • April 3, 2012
  • 12:16 AM
  • 987 views

Next Generation Artificial Intelligence

by Jason Carr in Wired Cosmos

As computer scientists this year celebrate the 100th anniversary of the birth of the mathematical genius Alan Turing, who set out the basis for digital computing in the 1930s to anticipate the electronic age, they still quest after a machine as adaptable and intelligent as the human brain. Now, computer scientist Hava Siegelmann of the [...]... Read more »

  • April 2, 2012
  • 08:04 AM
  • 1,136 views

Open Data Manchester: Twenty Four Hour Data People

by Duncan Hull in O'Really?

According to Francis Maude, Open Data is the “next industrial revolution”. Now you should obviously take everything politicians say with a large pinch of salt (especially Maude) but despite the political hyperbole, when it comes to data he is onto something.... Read more »

  • March 31, 2012
  • 01:10 PM
  • 760 views

Automated Science, Deep Data, and the Paradox of Information

by Bradley Voytek in Oscillatory Thoughts

Note: this was originally published by me over on the O'Reilly Radar.

A lot of great pieces have been written about the (relatively) recent surge in interest in "big data" and "data science", but in this piece I want to address the importance of deep data analysis: what we can learn from the statistical outliers by drilling down and asking, "What's different here? What's special about these outliers and what do they tell us about our models and assumptions?"

The reason that big data proponents are so excited about the burgeoning data revolution isn't just because of the math. Don't get me wrong, the math is fun, but we're excited because we can begin to distill patterns that were previously invisible to us due to a lack of information.

That's big data.

Of course, data are just a collection of facts; bits of information that are only given context--assigned meaning and importance--by human minds. It's not until we do something with the data that any of it matters. You can have the best machine learning algorithms, the tightest statistics, and the smartest people working on them, but none of that means anything until someone makes a story out of the results.

And therein lies the rub.

Do all these data tell us a story about ourselves and the universe in which we live, or are we simply hallucinating patterns that we want to see?

(Semi)Automated Science

In 2010, Cornell researchers Michael Schmidt and Hod Lipson published a groundbreaking paper in Science titled "Distilling Free-Form Natural Laws from Experimental Data". The premise was simple, and it essentially boiled down to the question, "can we algorithmically extract models to fit our data?"

So they hooked up a double pendulum--a seemingly chaotic system whose movements are governed by classical mechanics--and trained a machine learning algorithm on the motion data.

Their results were astounding.

In a matter of minutes the algorithm converged on Newton's second law of motion: f = ma. What took humanity tens of thousands of years to accomplish was completed on 32 cores in essentially no time at all.

In 2011 some neuroscience colleagues of mine, led by Tal Yarkoni, published a paper in Nature Methods titled "Large-scale automated synthesis of human functional neuroimaging data". In this paper the authors sought to extract patterns from the overwhelming flood of brain imaging research.

To do this they algorithmically extracted the 3D coordinates of significant brain activations from thousands of neuroimaging studies, along with words that frequently appeared in each study. Using these two pieces of data along with some simple (but clever) mathematical tools, they were able to create probabilistic maps of brain activation for any given term.

In other words, you type in a word such as "learning" on their website search and visualization tool, NeuroSynth, and they give you back a pattern of brain activity that you should expect to see during a learning task.

But that's not all. Given a pattern of brain activation, the system can perform a reverse inference, asking, "given the data that I'm observing, what is the most probable behavioral state that this brain is in?"

Similarly, in late 2010, my wife (Jessica Voytek) and I undertook a project to algorithmically discover associations between concepts in the peer-reviewed neuroscience literature. As a neuroscientist, the goal of my research is to understand relationships between the human brain, behavior, physiology, and disease. Unfortunately, the facts that tie all that information together are locked away in more than 21 million static peer-reviewed scientific publications.

How many undergrads would I need to hire to read through that many papers? Any volunteers?

Even more mind-boggling, each year more than 30,000 neuroscientists attend the annual Society for Neuroscience conference. If we assume that only two-thirds of those people actually do research, and if we assume that they only work a meager (for the sciences) 40 hours a week, that's around 40 million person-hours dedicated to but one branch of the sciences.

Annually.

This means that in the 10 years I've been attending that conference, more than 400 million person-hours have gone toward the pursuit of understanding the brain. Humanity built the pyramids in 30 years. The Apollo Project got us to the moon in about 8.

So my wife and I said to ourselves, "there has to be a better way".

Which led us to create brainSCANr, a simple (simplistic?) tool (currently itself under peer review) that makes the assumption that the more often two concepts appear together in the titles or abstracts of published papers, the more likely they are to be associated with one another.

For example, if 10,000 papers mention "Alzheimer's disease" that also mention "dementia", then Alzheimer's disease is probably related to dementia. In fact, there are 17,087 papers that mention Alzheimer's and dementia, whereas there are only 14 papers that mention Alzheimer's and, for example, creativity.

From this, we built what we're calling the "cognome", a mapping between brain structure, function, and disease.

Big data, data mining, and machine learning are becoming critical tools in the modern scientific arsenal. Examples abound: text mining recipes to find cultural food taste preferences, analyzing cultural trends via word use in books ("culturomics"), identifying seasonality of mood from tweets, and so on.

But so what?

Deep Data

What those three studies show us is that it's possible to automate, or at least semi-automate, critical aspects of the scientific method itself. Schmidt and Lipson show that it is possible to extract equations that perfectly model even seemingly chaotic systems. Yarkoni and colleagues show that it is possible to infer a complex behavioral state given input brain data.

My wife and I wanted to show that brainSCANr could be put to work for something more useful than just quantifying relationships between terms. So we created a simple algorithm to perform what we're calling "semi-automated hypothesis generation", which is predicated on a basic "the friend of a friend should be a friend" concept.

In the example below, the neurotransmitter "serotonin" has thousands of shared publications with "migraine", as well as with the brain region "striatum". However, migraine and striatum only share 16 publications.

That's very odd. Because in medicine there is a serotonin hypothesis for the root cause of migraines. And we (neuroscientists) know that serotonin is released in the striatum to modulate brain activity in that region. Given that those two things are true, why is there so little research regarding the role of the striatum in migraines?

Perhaps there's a missing connection?

Such missing links and other outliers in our models are the essence of deep data analytics. Sure, any data scientist worth their salt can take a mountain of data and reduce it down to a few simple plots. And such plots are important because they tell a story.

But those aren't the only stories that our data can tell us.

For example, in my geoanalytics work as the Data Evangelist for Uber, I put some of my (definitely rudimentary) neuroscience network analytic skills to work to figure out how people move from neighborhood to neighborhood in San Francisco.

At one point, I checked to see if men and women moved around the city differently. A very simple regression model showed that the number of men who go to any given neighborhood significantly predicts the number of women who go to that same neighborhood.

No big deal.

But what's cool was seeing where the outliers were. ... Read more »
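A minimal sketch of the co-occurrence and "friend of a friend" ideas described in the post above (my own toy code with invented term sets, not brainSCANr itself):

```python
# Count how often term pairs co-occur in (toy) abstracts, then look for
# "friend of a friend" gaps: two terms that each co-occur strongly with a
# shared third term but rarely with each other.
from collections import Counter
from itertools import combinations

abstracts = [
    {"serotonin", "migraine"}, {"serotonin", "migraine"},
    {"serotonin", "striatum"}, {"serotonin", "striatum"},
    {"striatum", "dopamine"},
]

pair_counts = Counter()
for terms in abstracts:
    pair_counts.update(combinations(sorted(terms), 2))

def cooccurrence(a, b):
    return pair_counts[tuple(sorted((a, b)))]

print(cooccurrence("serotonin", "migraine"),   # 2
      cooccurrence("serotonin", "striatum"),   # 2
      cooccurrence("migraine", "striatum"))    # 0 -> candidate missing link
```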

Yarkoni, T., Poldrack, R., Nichols, T., Van Essen, D., & Wager, T. (2011) Large-scale automated synthesis of human functional neuroimaging data. Nature Methods, 8(8), 665-670. DOI: 10.1038/nmeth.1635  

Ahn, Y., Ahnert, S., Bagrow, J., & Barabási, A. (2011) Flavor network and the principles of food pairing. Scientific Reports. DOI: 10.1038/srep00196  

Michel, J., Shen, Y., Aiden, A., Veres, A., Gray, M., , ., Pickett, J., Hoiberg, D., Clancy, D., Norvig, P.... (2010) Quantitative Analysis of Culture Using Millions of Digitized Books. Science, 331(6014), 176-182. DOI: 10.1126/science.1199644  

  • March 28, 2012
  • 05:01 PM
  • 1,006 views

Rare earth materials—the key to clean energy technology

by Cath in Basal Science (BS) Clarified

What are rare earth materials? Rare earth elements (REE) is a term generally used to describe the elements in the lanthanide series—the second-to-last row of the periodic table. However, yttrium and scandium, both Group IIIB transition metals, are often included as REE since they are naturally found together with other lanthanides and have similar [...]... Read more »

Wu, C., Yu, D., Law, C., & Wang, L. (2004) Properties of lead-free solder alloys with rare earth element additions. Materials Science and Engineering: R: Reports, 44(1), 1-44. DOI: 10.1016/j.mser.2004.01.001  

Alonso, E., Sherman, A., Wallington, T., Everson, M., Field, F., Roth, R., & Kirchain, R. (2012) Evaluating Rare Earth Element Availability: A Case with Revolutionary Demand from Clean Technologies. Environmental Science & Technology. DOI: 10.1021/es203518d

Zawisza, B., Pytlakowska, K., Feist, B., Polowniak, M., Kita, A., & Sitko, R. (2011) Determination of rare earth elements by spectroscopic techniques: a review. Journal of Analytical Atomic Spectrometry, 26(12), 2373. DOI: 10.1039/c1ja10140d  

  • March 23, 2012
  • 12:05 PM
  • 656 views

Video: Bioinspired Robojelly Fuelled by Hydrogen

by United Academics in United Academics

It’s a small robot powered by hydrogen that mimics the movements of a jellyfish. ... Read more »

Tadesse, Y., Villanueva, A., Haines, C., Novitski, D., Baughman, R., & Priya, S. (2012) Hydrogen-fuel-powered bell segments of biomimetic jellyfish. Smart Materials and Structures, 21(4), 45013. DOI: 10.1088/0964-1726/21/4/045013  

  • March 21, 2012
  • 09:00 PM
  • 1,000 views

DNA nanoRobot — the next-generation of targeted drug delivery systems?

by Char in Basal Science (BS) Clarified

Targeted drug delivery is a highly sought-after technology. Not only does it increase the efficiency of the drugs, but it may also reduce the side effects by localizing the drug only where it’s needed. Shawn Douglas and co-researchers at the Harvard Hansjorg Wyss Institute for Biologically Inspired Engineering have created a nano-robot made of DNA strands [...]... Read more »

  • March 14, 2012
  • 09:30 PM
  • 1,155 views

Sonification: Listen To The Sun / Listen To CERN's LHC

by DJ Busby in Astronasty

Solar storm data has recently been translated into audio through a process called sonification. CERN uses this technology too. Here are videos and explanations for both.... Read more »
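The basic idea behind sonification is simple to sketch (a generic illustration, not the pipeline used for the solar or LHC data): map each data value onto a pitch and write the sequence out as audio.

```python
# Generic data-to-audio sketch: each (made-up) measurement becomes a short
# sine-wave note whose pitch rises with the value. Writes "sonified.wav".
import math, struct, wave

data = [0.1, 0.4, 0.9, 0.3, 0.7, 0.2, 0.8, 0.5]   # normalised measurements
rate, note_len = 44100, 0.25                       # sample rate, seconds per note

with wave.open("sonified.wav", "w") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)        # 16-bit PCM
    wav.setframerate(rate)
    for value in data:
        freq = 220 + value * 660                   # map [0, 1] onto 220-880 Hz
        frames = b"".join(
            struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * freq * n / rate)))
            for n in range(int(rate * note_len))
        )
        wav.writeframes(frames)
```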

Alberto de Campo, Natascha Hormann, Harald Markum, Willibald Plessas, & Bianka Sengi. (2005) Sonification of lattice data: The spectrum of the Dirac operator across the deconfinement transition. Proceedings of Science. info:other/PoS: LAT2005-152

Katharina Vogt, Robert Höldrich, David Pirrò, Martin Rumori, Stefan Rossegger, Werner Riegler, & Matevž Tadel. (2010) A Sonic Time Projection Chamber: Sonified Particle Detection at CERN. International Conference on Auditory Display. info:other/ISBN: 0-9670904-3-1

  • March 9, 2012
  • 01:46 PM
  • 762 views

Have connectionist models killed off beliefs?

by Ben in Critical Science

Connectionist models are widely held to have had a revolutionary impact upon cognitive science (Marcus, 2001). However, they are also employed in a highly controversial doctrine known as ‘eliminative materialism’, which claims that the central posits of our common understanding of human psychology, including our conception of ‘beliefs’, are entirely false (Ramsey, Stich & Garon, 1990). If such arguments are accepted, a radical reorientation is necessary in how we perceive and predict human behaviour, one that does not allow for human desire, intention or beliefs of any kind.... Read more »

  • March 8, 2012
  • 01:44 PM
  • 622 views

Getting to the Root of Microfluidics

by Hector Munoz in Microfluidic Future

It’s not hard to see that a lot here at Microfluidic Future focuses on the medical applications of microfluidics, but that doesn’t mean that I’m not interested in other ways the technology can be used. I love to see novel applications of microfluidics because progress for anyone is progress for everyone. That brings me to today’s post on the RootChip. If the name isn’t a total giveaway, I recently came across an article that uses a microfluidic chip to study the roots of plants. In the article, “The RootChip: An Integrated Microfluidic Chip for Plant Science” by Stephen Quake and other researchers from Stanford University, a device is developed to study the roots of Arabidopsis thaliana.... Read more »

Grossmann, G., Guo, W., Ehrhardt, D., Frommer, W., Sit, R., Quake, S., & Meier, M. (2011) The RootChip: An Integrated Microfluidic Chip for Plant Science. THE PLANT CELL ONLINE, 23(12), 4234-4240. DOI: 10.1105/tpc.111.092577  

  • March 7, 2012
  • 05:37 AM
  • 495 views

How do children learn and represent music?

by Henkjan Honing in Music Matters


Last month the peer-reviewed online journal Visions of Research in Music Education published a tribute to Jeanne Bamberger. See here for more information. ... Read more »

Desain, P., & Honing, H. (1988) LOCO: A Composition Microworld in Logo. Computer Music Journal, 12(3), 30. DOI: 10.2307/3680334  

  • March 6, 2012
  • 05:36 PM
  • 831 views

Rapid microfluidics-based measurements aid bitumen extraction

by Cath in Basal Science (BS) Clarified

Here’s a piece of engineering news to kick off Canada’s National Engineering month: A team of researchers led by Professor David Sinton of the University of Toronto’s Mechanical and Industrial Engineering Department developed a process to analyze the behavior of bitumen (an extra heavy oil) using a microfluidic chip, a tool commonly used in the [...]... Read more »

  • March 2, 2012
  • 11:32 AM
  • 891 views

5 Things to Know About SAMtools Mpileup

by Daniel Koboldt in Massgenomics

Next-generation sequencing instruments might be considered a disruptive technology. The incredible throughput of these machines, even 4-5 years ago, clearly mandated the development of a new generation of algorithms and data formats capable of storing, processing, and analyzing huge amounts of sequence data. One key achievement in next-generation sequencing bioinformatics was the specification of sequence [...]... Read more »
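For readers who haven't seen it, here is a tiny sketch of the per-position text that samtools mpileup emits. The line is fabricated, and the parsing deliberately ignores the indel and read-start/end markup that real pileup output contains.

```python
# Each mpileup line has (at least) six tab-separated columns: chromosome,
# 1-based position, reference base, read depth, read bases, base qualities.
# The line below is a made-up example, not real data.
line = "chr1\t10042\tA\t5\t..,.C\tIIHFG"

chrom, pos, ref, depth, bases, quals = line.rstrip("\n").split("\t")[:6]
non_ref = sum(b.upper() in "ACGTN" for b in bases)   # '.' and ',' match the reference

print(f"{chrom}:{pos} ref={ref} depth={depth} non-reference bases={non_ref}")
```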

Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, Marth G, Abecasis G, Durbin R, & 1000 Genome Project Data Processing Subgroup. (2009) The Sequence Alignment/Map format and SAMtools. Bioinformatics (Oxford, England), 25(16), 2078-9. PMID: 19505943  

  • February 28, 2012
  • 11:06 PM
  • 917 views

“Power Felt”–a thermoelectric fabric that uses body heat to power electronics

by Cath in Basal Science (BS) Clarified

Did you know that while you’re sitting down and reading this post, your body can provide about 4-6 watts of power? That’s enough power to run a clock radio. Humans store energy from the food they consume, and some of this energy is then emitted as body heat. This wasted heat can be recovered at [...]... Read more »

Hewitt, C., Kaiser, A., Roth, S., Craps, M., Czerw, R., & Carroll, D. (2012) Multilayered Carbon Nanotube/Polymer Composite Based Thermoelectric Fabrics. Nano Letters. DOI: 10.1021/nl203806q

Snyder, G., & Toberer, E. (2008) Complex thermoelectric materials. Nature Materials, 7(2), 105-114. DOI: 10.1038/nmat2090  
