Drone patrols are nothing new; by now, they're fairly humdrum stuff. But what about a drone patrol on an alien world, one that could potentially last for decades and bring us a constant stream of data on everything we wanted to know about the world in question? Well, that's the [...]... Read more »
Barnes, J., Lemke, L., Foch, R., McKay, C., Beyer, R., Radebaugh, J., Atkinson, D., Lorenz, R., Le Mouélic, S., Rodriguez, S.... (2011) AVIATR — Aerial Vehicle for In-situ and Airborne Titan Reconnaissance. Experimental Astronomy. DOI: 10.1007/s10686-011-9275-9
Chances are, your computer's current hard drive can store around 500 GB, and if you're a real video editing or graphics enthusiast, you either bought yourself, or customized your computer to have a 1 TB drive. But what if in the same space that your hard drive takes up now, you could host a multi-PB [...]... Read more »
I will return to blogging about theoretical computer science and algorithm-related mathematics next week, but I wanted to take a few minutes today to mention a rare research opportunity that has arisen as a result of the hack of the … Continue reading →... Read more »
Moshe Zviran, & William J. Haga. (1999) Password Security: An Empirical Study. Journal of Management Information Systems, 15(4), 161-185. info:/
A research group at Indiana University has developed a program called Truthy that allows anyone to track cases of "astroturfing" on Twitter. Any search term can be entered into Truthy, and the program will scan the Twitter API and build a model of how the search term originated. ... Read more »
Ratkiewicz, J., Conover, M., Meiss, M., Gonçalves, B., Patil, S., Flammini, A., & Menczer, F. (2011) Truthy: Mapping the Spread of Astroturf in Microblog Streams. World Wide Web Conference Committee (IW3C2). info:/
See the frozen puck that appears to be floating in mid-air? Physics is what's holding it there; magnetic levitation, to be exact.... Read more »
Tying up loose ends from my three posts in December about rectangulation of orthogonal polygons. Derrick Stolee requested in a comment a resolution of the computational complexity of the 3D version of the problem of decomposing a shape into the … Continue reading →... Read more »
Victor J. Dielissen, & Anne Kaldewaij. (1991) Rectangular partition is polynomial in two dimensions but NP-complete in three. Information Processing Letters, 38(1), 1-6. info:/10.1016/0020-0190(91)90207-X
We have left 2011 with a lot of exciting results from experiments. Neutrinos appear to move a bit faster than expected, and the Higgs provided some glimpses at CERN. Of course, this kind of Higgs appears somewhat boring at first, being in the range of what the Standard Model predicted. But it is really too early to [...]... Read more »
Nuno Cardoso, & Pedro Bicudo. (2011) Generating SU(Nc) pure gauge lattice QCD configurations on GPUs with CUDA and OpenMP. arXiv. arXiv: 1112.4533v1
Axel Maas. (2011) Describing gauge bosons at zero and finite temperature. arXiv. arXiv: 1106.3942v2
You can now add wearable human-motion sensors to the list of uses for carbon nanotubes. Researchers at the National Institute of Advanced Industrial Science and Technology (AIST) in Tsukuba, Japan, have created a wearable electronic sensor that can detect human motions such as breathing patterns. The sensor was made using stretchable carbon nanotube films. [...]... Read more »
Lipomi, D., Vosgueritchian, M., Tee, B., Hellstrom, S., Lee, J., Fox, C., & Bao, Z. (2011) Skin-like pressure and strain sensors based on transparent elastic films of carbon nanotubes. Nature Nanotechnology, 6(12), 788-792. DOI: 10.1038/NNANO.2011.184
Yamada, T., Hayamizu, Y., Yamamoto, Y., Yomogida, Y., Izadi-Najafabadi, A., Futaba, D., & Hata, K. (2011) A stretchable carbon nanotube strain sensor for human-motion detection. Nature Nanotechnology, 6(5), 296-301. DOI: 10.1038/nnano.2011.36
According to the New England Journal of Medicine, after thirty years of silence, the authors of a standard clinical psychiatric bedside test have issued takedown orders against new medical research.... Read more »
Invisibility is something we often see on TV shows and in comic books, and we've always thought it unattainable, at least in the foreseeable future. However, this past year two groups of scientists demonstrated promising methods of achieving "invisibility" using advanced materials.... Read more »
Aliev, A., Gartstein, Y., & Baughman, R. (2011) Mirage effect from thermally modulated transparent carbon nanotube sheets. Nanotechnology, 22(43), 435704. DOI: 10.1088/0957-4484/22/43/435704
Chen, X., Luo, Y., Zhang, J., Jiang, K., Pendry, J.B., & Zhang, S. (2011) Macroscopic invisibility cloaking of visible light. Nature Communications, 2, 176. PMID: 21285954
Multimodality, or using a combination of visual, auditory, haptic, and other sensory modalities in the presentation of knowledge in serious gaming, improves learning outcomes. Interactivity, or the communication between player and the digital gaming system in serious gaming, also improves learning outcomes. But these are two design elements and not psychological attributes of users of [...]
... Read more »
Lee, Y., Heeter, C., Magerko, B., & Medler, B. (2011) Gaming Mindsets: Implicit Theories in Serious Game Learning. Cyberpsychology, Behavior, and Social Networking. DOI: 10.1089/cyber.2011.0328
Ritterfeld, U., Shen, C., Wang, H., Nocera, L., & Wong, W. (2009) Multimodality and Interactivity: Connecting Properties of Serious Games with Educational Outcomes. CyberPsychology & Behavior, 12(6), 691-697. DOI: 10.1089/cpb.2009.0099
In this third (and final) post on polygon rectangulation, I will consider how to find the rectangulation of minimum total length for an orthogonal polygon. In part one of this short series, we considered rectangulations with a minimum number of … Continue reading →... Read more »
Andrzej Lingas, Ron Y. Pinter, Ronald L. Rivest, & Adi Shamir. (1982) Minimum edge length partitioning of rectilinear polygons. Proceedings of the 20th Allerton Conference on Communication, Control, and Computing, 53-63. info:/
In a nondescript office park outside Vancouver, with views of snow-capped mountains in the distance, is a mirrored building where very special work is being done. The company is D-Wave, the quantum computing company. D-Wave's mission is to build a computer that will solve humanity's grandest challenges. D-Wave aims to develop the first quantum computer in the world; perhaps they already have. The advent of quantum computers would be a sea change, allowing the breaking of cryptography, better artificial intelligence, and exponential increases in computing speed for certain applications. The idea of quantum computers has been bubbling since Richard Feynman first proposed that the best way to simulate quantum phenomena would be with quantum systems themselves, but it has been exceedingly difficult to engineer a computer that can harness the possibilities of quantum information processing. Hardly a decade ago, D-Wave began with a misstep, which is the origin of their name. Their first idea would have used yttrium barium copper oxide (YBCO), a charcoal-looking material whose superconducting transition temperature is above the boiling point of liquid nitrogen; this is why YBCO is the standard science-lab demonstration of superconducting magnetic levitation. Ultimately, YBCO's crystalline structure proved to be an imperfect material, but the cloverleaf d-wave atomic orbital that lends YBCO its superconducting properties stuck as D-Wave's name. The vision of D-Wave did not change, but their approach did. They realized they would have to engineer and build the majority of the technology necessary to create a quantum computer themselves.
They even built their own superconducting electronics foundry to perform the electron-beam lithography and metallic thin-film evaporation processes necessary to create the qubit microchips at the heart of their machine. I recently got to visit D-Wave, the factory of quantum dreams, for myself. The business park that D-Wave is in is so nondescript that we drove right by it at first. I was expecting lasers and other blinking lights, but instead our rented University of Washington van pulled into the wrong parking lot, which we narrowly reversed out of. In the van were several other quantum aficionados, students, and professors, mostly from computer science, who were curious about what a quantum computer actually looks like. I am going to cut the suspense and tell you now that a quantum computer looks like a really big black refrigerator, or maybe a small room. The chip at the heart of the room is cooled to a few millikelvin, colder than interstellar space, and that is where superconducting circuits count electric quantum sheep. The tour began with us milling around a conference room while our guide, a young scientist and engineer, held in his hand a wafer bearing hundreds of quantum processors. I took a picture; after I left that conference room, they did not let me take any more. Entering the laboratory, it suddenly dawned on me that this wasn't just a place for quantum dreams; it was real and observable. The entire notion of a quantum computer became more tangible. A quantum computer is a machine that uses quantum properties like entanglement to perform computations on data. The biggest similarity between a quantum computer and a regular computer is that they both perform algorithms to manipulate data. The data, or bits, of a quantum computer are known as qubits. A qubit is not limited to the values 0 or 1 as in a classical computer but can be in a superposition of these states simultaneously.
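Superposition and interference can be sketched with simple amplitude bookkeeping. Here is a minimal toy in Python, assuming the standard single-qubit Hadamard gate (this illustrates gate-model amplitudes generally, not D-Wave's architecture):

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to a qubit given as a pair of amplitudes (a0, a1)."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

state = (1.0, 0.0)        # the qubit starts in |0>
state = hadamard(state)   # equal superposition: amplitudes (~0.707, ~0.707)
state = hadamard(state)   # amplitudes interfere: |1> cancels, back to |0>
print(state)
```

Applying the gate twice makes the two paths to |1> cancel while the paths to |0> reinforce, so the qubit returns to |0> (up to floating-point rounding): the "wrong answers cancel, right answers reinforce" idea in miniature.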
Sometimes a quantum computer doesn't even give you the same answer to the exact same question. Weird. The best way to conceive of quantum computing may be to imagine a computation where each possible output has either a positive or negative probability amplitude (a strange quantum idea there), and the amplitudes for wrong answers cancel to zero while those for right answers are reinforced. The power of quantum computers is nicely understood within the theoretical framework of computational complexity theory. Say, for example, that I give you the number 4.60941636 × 10^18 and ask for its prime factors. If someone were to give you the prime factors, you could verify them as correct very quickly, but what if I asked you to generate the prime factors for me (I dare you. I have the answer. I challenge you)? The quintessential problem here is the P versus NP question, which asks whether a problem whose solution can be verified quickly can also be solved quickly. Quickly is defined as polynomial time, meaning that the running time scales as the input size raised to some power. Computational complexity theory basically attempts to categorize different kinds of problems depending on how fast a solution can be found as the size of the problem grows. A P-class problem is one in which the solution can be found within polynomial time. An NP-class problem is one in which the solution can be verified in polynomial time. So if I ask you for the prime factors of my number above, that is an NP problem: given the factors, you could verify the answer quickly, but it would be very difficult to calculate them given only the number. It is an open question, but it appears likely that P is a proper subset of NP; this would mean that problems verifiable in polynomial time are not necessarily solvable in polynomial time.
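The verify-versus-solve asymmetry above can be made concrete in a few lines of Python. The 19-digit challenge number is left alone; the semiprime below is a made-up example small enough to factor in a moment, but the same gap only widens with size:

```python
def verify_factors(n, factors):
    """Checking a proposed factorization is cheap: multiply and compare."""
    product = 1
    for f in factors:
        product *= f
    return product == n

def trial_division(n):
    """Finding the factors from scratch: work grows with sqrt(n),
    i.e. exponentially in the number of digits of n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

n = 10007 * 10009                          # a small semiprime: 100160063
assert verify_factors(n, [10007, 10009])   # verification: one multiplication
print(trial_division(n))                   # -> [10007, 10009], after ~10^4 divisions
```

Verification is a single multiplication no matter how large n gets; trial division already needs thousands of steps here, and for a 19-digit semiprime with two large prime factors it needs billions.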
The issue is that for some very interesting real-world problems, we could verify the answer if we stumbled upon it, but we won't even be able to stumble upon the answer in a time shorter than the age of the universe with current computers and algorithms. What we know and what we think we know is a sea of confusion, but the popular opinion, and where people would place their wagers, is that P is not equal to NP. Suddenly, with mystique and spooky action at a distance, quantum computing comes swooping in and claims to be able to solve some NP problems and all P problems very quickly. A general quantum computer would belong to the complexity class BQP. There is a grand question at hand: is BQP in NP? (More generally, is BQP contained anywhere in the polynomial hierarchy? The polynomial hierarchy is a complexity class that generalizes P and NP to a particular kind of perfect abstract computer with the ability to solve decision problems in a single step. See this paper on BQP and the Polynomial Hierarchy by Scott Aaronson, who is an outspoken critic of D-Wave.) At this time we cannot even claim to have evidence that BQP is not part of NP, but most scientists close to the problem think that BQP is not a subset of NP. Quantum computing researchers are trying to get better evidence that quantum computers cannot solve NP-complete problems in polynomial time (if NP were a subset of BQP, then the polynomial hierarchy would collapse). A reasonable wager I would take is that P is a proper subset of BQP. This has not been rigorously proved, but it is suspected to be true; further, there are some NP problems, such as prime factorization and some combinatoric problems, that quantum computers are known to solve efficiently while no efficient classical algorithm is known. There might be an elephant in the room here.
The D-Wave architecture is almost certainly attacking an NP-complete problem, and reasonable logic says that quantum computers will solve P problems and some NP problems, but not NP-complete problems (this is also not proven, but suspected). An NP-complete problem is one for which the time it takes to compute the answer may reach into millions or billions of years, even for moderately large instances. Thus we don't know whether this particular quantum computer D-Wave has built even lets us do anything efficiently that we couldn't already do efficiently on a classical computer; it doesn't seem to be a BQP-class machine, so it cannot, for example, break prime-factorization cryptography. So, yes, it is a quantum machine, but we don't have any evidence it is an interesting machine. At the same time, we don't have any evidence it is an uninteresting machine. It is not general-purpose enough to clearly be a big deal, nor is it so trivial as to be totally uninteresting. The D-Wave lab was bigger than I expected, and it was at once more cluttered and more precise than I thought it would be. It turns out the entire enterprise of quantum computing follows this trend. There are a lot of factors they contend with, and on the tour I saw people dead focused with their eyes on a microscope executing precise wiring, coders working in pairs, theoreticians gesturing at a chaotic whiteboard, and even automated processes being carried out by computers with appropriately futuristic-looking displays. The engineering problems D-Wave faces include circuit design, fabrication, cryogenics, magnetic shielding, and so on. There is too much to discuss here, so I will focus on what I think are scientifically the two most interesting parts of the D-Wave quantum computer: the qubit physics and the quantum algorithm they implement; in fact these two par... Read more »
Harris, R., Johansson, J., Berkley, A., Johnson, M., Lanting, T., Han, S., Bunyk, P., Ladizinsky, E., Oh, T., Perminov, I.... (2010) Experimental demonstration of a robust and scalable flux qubit. Physical Review B, 81(13). DOI: 10.1103/PhysRevB.81.134510
Harris, R., Johnson, M., Han, S., Berkley, A., Johansson, J., Bunyk, P., Ladizinsky, E., Govorkov, S., Thom, M., Uchaikin, S.... (2008) Probing Noise in Flux Qubits via Macroscopic Resonant Tunneling. Physical Review Letters, 101(11). DOI: 10.1103/PhysRevLett.101.117003
Sometimes the answers are right in front of your eyes. In a quest to develop lightweight structural materials with excellent strength and toughness (damage resistance), researchers have turned to the abalone shell for inspiration.
... Read more »
"This is very beautiful. It is neat, it is modern technology, and it is fast. I am just wondering very seriously about the biological validity of what we are doing with this machine." - Melvin Moss, 1967* "This machine" to which Moss referred nearly 50 years ago was not a contraption to clone a Neandertal or a Godzilla-like MechaGodzilla, but a computer. Along these lines, a paper came out recently describing a new, automated method for analyzing (biological) shapes, and while I think the method is pretty sweet, I think future researchers employing it should keep Moss's monition in mind. Doug Boyer and colleagues (2011) present "Algorithms to automatically quantify the geometric similarity of anatomical surfaces." It seems the main goals of the study were to make shape analysis faster and easier for people who don't otherwise study anatomy (such as geneticists), making it possible to amass large phenotypic datasets comparable to the troves of genetic data accumulated in recent years. Using some intense math that's way over my head, the computer algorithm takes surface data (acquired through CT or laser scans) of a pair of specimens and automatically fits these forms with a "correspondence map" linking geometrically (and not necessarily biologically) homologous features between the two. It then uses the map to fit landmarks (a la geometric morphometrics) which are used to calculate the shape difference metric between individuals in the pairings. See at the right just how pretty it is! The authors posit that this technique could be used with genetic knock-out studies to assess how certain genes affect the development of bones and teeth, or to model the development of organs. That certainly would be useful in biomedical and evo-devo research. But while I appreciate the automated-ness of the procedure, I don't think we can simply write off the role of the biologist in determining what features are homologous, in favor of a computer.
The paper itself illustrates this nicely. The authors state that there is debate about the origins of a cusp on the molar tooth of the sportive lemur (Lepilemur): is it the same as the entoconid of the living mouse lemur, or the enlarged metaconid of the extinct "koala lemur"? Their automated algorithm can map the sportive lemur's mystery cusp to match either alternative scenario. It is the external paleontological and phylogenetic evidence, not the intrinsic shape information, that renders the alternative scenario more plausible. So let me reiterate that I think this paper presents an important step for the study of the biology of form, or the form of biology. Automating the analysis of form will certainly expedite studies of large datasets (not to mention freeing up the time of hordes of research assistants). But I hope that researchers employing this procedure will have a little Mossian Angel (poor play on "guardian angel," sorry) on their shoulders, reminding them that the algorithm won't necessarily show them homology better than their own experience. And I hope all biologists have this Mossian Angel there, reminding them that even though this method is "neat ... modern technology, and ... fast," it may not be the most appropriate method for their research question. References: Boyer, D., Lipman, Y., St. Clair, E., Puente, J., Patel, B., Funkhouser, T., Jernvall, J., & Daubechies, I. (2011). Algorithms to automatically quantify the geometric similarity of anatomical surfaces. Proceedings of the National Academy of Sciences, 108(45), 18221-18226. DOI: 10.1073/pnas.1112822108 *This quote comes from a discussion at the end of a symposium: Cranio-Facial Growth in Man (1967). RE Moyers and WM Krogman, editors. New York: Pergamon Press.... Read more »
Boyer, D., Lipman, Y., St. Clair, E., Puente, J., Patel, B., Funkhouser, T., Jernvall, J., & Daubechies, I. (2011) Algorithms to automatically quantify the geometric similarity of anatomical surfaces. Proceedings of the National Academy of Sciences, 108(45), 18221-18226. DOI: 10.1073/pnas.1112822108
Apps, apps, and more apps. Software is everything, and everything runs on software. Almost every industry in the U.S. has been disrupted by software. The health care field is not one of them. Easily accessible consumer information makes everyone a little bit of a doctor, and emerging portable diagnostic devices will strengthen the transition. Are we up to it? Not yet. ... Read more »
This post is the second in a series on polygon rectangulation. In my previous post, I discussed methods to decompose an orthogonal polygon into a minimum number of rectangles. (See that post for definitions and motivation.) In my next post, … Continue reading →... Read more »
O'Rourke, J., & Tewari, G. (2004) The structure of optimal partitions of orthogonal polygons into fat rectangles. Computational Geometry, 28(1), 49-71. DOI: 10.1016/j.comgeo.2004.01.007
J. O'Rourke, & G. Tewari. (2002) Partitioning orthogonal polygons into fat rectangles in polynomial time. Canadian Conference on Computational Geometry, 97-100. info:/
A paper online at Bioinformatics describes our flagship algorithm for detecting somatic point mutations in whole-genome sequencing of tumor samples. This freely available software package, called SomaticSniper, performs a Bayesian comparison of the genotype likelihoods in tumor and normal samples at every [covered] position in the genome. The study includes a detailed [...]... Read more »
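To give a flavor of the Bayesian comparison described in the excerpt, here is a toy Python sketch. This is NOT SomaticSniper's actual model; the genotype labels, flat prior, and likelihood values are all made up for illustration:

```python
GENOTYPES = ["AA", "AB", "BB"]  # hypothetical diploid genotypes at one site

def posteriors(likelihoods, prior=None):
    """Turn per-genotype likelihoods (times an optional prior) into posteriors."""
    prior = prior or {g: 1.0 / len(GENOTYPES) for g in GENOTYPES}
    unnorm = {g: likelihoods[g] * prior[g] for g in GENOTYPES}
    total = sum(unnorm.values())
    return {g: v / total for g, v in unnorm.items()}

def somatic_probability(tumor_lik, normal_lik):
    """P(tumor and normal genotypes differ), assuming the two samples'
    posteriors are independent: 1 minus the chance they agree."""
    pt = posteriors(tumor_lik)
    pn = posteriors(normal_lik)
    p_same = sum(pt[g] * pn[g] for g in GENOTYPES)
    return 1.0 - p_same

# Made-up likelihoods: normal strongly supports AA, tumor supports AB.
normal = {"AA": 1e-2, "AB": 1e-6, "BB": 1e-9}
tumor  = {"AA": 1e-5, "AB": 1e-2, "BB": 1e-7}
print(round(somatic_probability(tumor, normal), 4))  # close to 1: likely somatic
```

Running the comparison at every covered position and keeping sites where this probability is high is the general shape of the approach; the real tool's priors, likelihood model, and scoring are described in the paper.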
Larson, D.E., Harris, C.C., Chen, K., Koboldt, D.C., Abbott, T.E., Dooling, D.J., Ley, T.J., Mardis, E.R., Wilson, R.K., & Ding, L. (2011) SomaticSniper: Identification of Somatic Point Mutations in Whole Genome Sequencing Data. Bioinformatics. DOI: 10.1093/bioinformatics/btr665
A new paper provides empirical evidence for something that many people, like Edward Tufte, have been saying for years: graphs and figures are better than tables. Cook and Teo took the results of statistical simulations and presented them to people in the form of graphs or tables. Everyone was able to answer questions about the data more quickly using a graph. Less experienced people (i.e., undergraduates compared to postgraduates) were able to make more accurate statements about the results when presented with a table rather than a graph. They note that many journals print tables that make matters even worse: tables often have far too many significant digits, and readers are often asked to make comparisons horizontally rather than vertically. If you are thinking of putting a table on your poster: burn it. Reference: Cook A, Teo S. 2011. The communicability of graphical alternatives to tabular displays of statistical simulation studies. PLoS ONE 6(11): e27974. DOI: 10.1371/journal.pone.0027974 Photo by cranky messiah on Flickr; used under a Creative Commons license.... Read more »
Cook A, & Teo S. (2011) The communicability of graphical alternatives to tabular displays of statistical simulation studies. PLoS ONE, 6(11). DOI: 10.1371/journal.pone.0027974
What do you think of when you hear the word plastic? Flexible? Recyclable? Strong?
We typically associate material characteristics with their applications. Plastics used in safety helmets have very good impact resistance but those used for plastic bottles are chemically resistant. Even with the wide range of plastics available, recyclable plastics are typically softer than their harder non-recyclable counterparts. Wouldn’t it be great if we could pick and choose the properties we want in a material; for example, a durable plastic that’s also recyclable?... Read more »
Montarnal, D., Capelot, M., Tournilhac, F., & Leibler, L. (2011) Silica-like malleable materials from permanent organic networks. Science, 334(6058), 965-8. PMID: 22096195
Research Blogging is powered by SMG Technology.
To learn more, visit seedmediagroup.com.