Post List

Computer Science / Engineering posts


  • December 5, 2011
  • 04:16 PM
  • 721 views

Breaking out of the mold—new recyclable plastic that’s stable when heated

by Cath in Basal Science (BS) Clarified

What do you think of when you hear the word plastic? Flexible? Recyclable? Strong?

We typically associate material characteristics with their applications. Plastics used in safety helmets have very good impact resistance, while those used for plastic bottles are chemically resistant. Even with the wide range of plastics available, recyclable plastics are typically softer than their non-recyclable counterparts. Wouldn’t it be great if we could pick and choose the properties we want in a material, for example a durable plastic that’s also recyclable?... Read more »

Montarnal D, Capelot M, Tournilhac F, & Leibler L. (2011) Silica-like malleable materials from permanent organic networks. Science (New York, N.Y.), 334(6058), 965-8. PMID: 22096195  

  • December 5, 2011
  • 11:47 AM
  • 702 views

Techie Ecstasy: Preview of SIGGRAPH Asia 2011

by Nsikan Akpan in That's Basic Science

Highlights from the forthcoming tech fest SIGGRAPH Asia 2011

1) An iPhone app that writes your autobiography from internet posts

2) A Cartography graphics tool for cardiologists

3) A Renaissance collage generator.

Plus a video of some great rendering software innovations.... Read more »

Zhu, B., Iwata, M., Haraguchi, R., Ashihara, T., Umetani, N., Igarashi, T., & Nakazawa, K. (2011) Sketch-based Dynamic Illustration of Fluid Systems. ACM Transactions on Graphics, 30(6), 1. DOI: 10.1145/2070781.2024168  

Huang, H., Zhang, L., & Zhang, H. (2011) Arcimboldo-like collage using internet images. ACM Transactions on Graphics, 30(6), 1. DOI: 10.1145/2070781.2024189  

  • December 5, 2011
  • 01:46 AM
  • 1,003 views

Students' use of social media

by Dr Shock in Dr Shock MD PhD

We have been trying to use Twitter during lectures, especially since the group is so large that 100 to 200 students can only follow the lecture from another lecture room on a monitor. It wasn’t a success: the question time during and after the lecture was hardly used, and only 8-10 questions were submitted via Twitter. [...]


... Read more »

Giordano C, & Giordano C. (2011) Health professions students' use of social media. Journal of allied health, 40(2), 78-81. PMID: 21695367  

  • December 4, 2011
  • 07:31 AM
  • 1,046 views

Why computers can’t predict revolutions

by Greg Fish in weird things

Back in September, news outlets worldwide reported the results of a paper which claimed that a supercomputer had a knack for predicting revolutions and key global events, able to pick up on the events of Tahrir Square in Cairo and even get a fix on Osama bin Laden’s location. After reviewing the paper in question, I [...]... Read more »

Leetaru, K. (2011) Culturomics 2.0: Forecasting large-scale human behavior using global news media tone in time and space. First Monday, 16(9). info:/

  • December 2, 2011
  • 04:13 PM
  • 647 views

Polygon rectangulation, part 1: Minimum number of rectangles

by Aaron Sterling in Nanoexplanations

Over the next few posts, I will consider problems of polygon rectangulation: given as input an orthogonal polygon (all interior angles are 90 or 270 degrees), decompose it into adjacent, nonoverlapping rectangles that fully cover it. Different problems impose different conditions … Continue reading →... Read more »
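
For readers who want to experiment, here is a minimal sketch (mine, not from the post) of the simplest possible rectangulation: cutting a hole-free orthogonal polygon into horizontal slabs at every vertex height. It produces a valid, non-overlapping cover, though generally not the minimum number of rectangles that the cited papers address; the vertex format and function name are my own assumptions.

```python
# Illustrative sketch: naive horizontal-slab rectangulation of a simple,
# hole-free orthogonal polygon. Returns a valid (generally non-minimum)
# set of non-overlapping rectangles covering the polygon.

def slab_rectangulation(vertices):
    """vertices: corners (x, y) of a simple orthogonal polygon, in order.
    Returns rectangles as ((x1, y1), (x2, y2))."""
    n = len(vertices)
    # Collect vertical edges as (x, y_low, y_high).
    vert_edges = []
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        if x1 == x2:  # vertical edge
            vert_edges.append((x1, min(y1, y2), max(y1, y2)))
    # Cut the plane into horizontal slabs at every distinct vertex y.
    ys = sorted({y for _, y in vertices})
    rects = []
    for ylo, yhi in zip(ys, ys[1:]):
        ymid = (ylo + yhi) / 2.0
        # x-coordinates of vertical edges crossing this slab, left to right.
        xs = sorted(x for x, a, b in vert_edges if a < ymid < b)
        # Interior lies between alternating (even/odd) crossings.
        for xl, xr in zip(xs[0::2], xs[1::2]):
            rects.append(((xl, ylo), (xr, yhi)))
    return rects

if __name__ == "__main__":
    # L-shaped polygon: the minimum partition needs 2 rectangles,
    # and the slab method happens to return exactly 2 here.
    L_shape = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
    print(slab_rectangulation(L_shape))
```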

W.T. Liou, J.J.M. Tan, & R.C.T. Lee. (1989) Minimum Partitioning Simple Rectilinear Polygons in O(n log log n) Time. Symposium on Computational Geometry, 344-353. info:/10.1145/73833.73871

Ferrari, L., Sankar, P., & Sklansky, J. (1984) Minimal rectangular partitions of digitized blobs. Computer Vision, Graphics, and Image Processing, 28(1), 58-71. DOI: 10.1016/0734-189X(84)90139-7  

  • December 1, 2011
  • 11:30 AM
  • 748 views

The privacy payoff

by David Bradley in Sciencetext

What aspects of online privacy do we worry about most? Seemingly, there are three major concerns: The first is that, on the internet, data about us is collected permanently and we then lose control of it. Secondly, we are often poorly informed about how the data we provide is going to be used. Thirdly, handing over [...]
... Read more »

Irene Pollach, & Horst Treiblmaier. (2011) The influence of privacy concerns on perceptions of web personalisation. Int. J. Web Science, 1(1/2), 3-20. info:/

  • November 30, 2011
  • 09:10 AM
  • 692 views

Technology experts can detect if images are photoshopped or not

by United Academics in United Academics

Angelina Jolie wrinkle-free? Penelope Cruz a D-cup? Not anymore. Scientists have come up with a way to detect whether photographs of celebrities or models have been airbrushed. Now they hope it will be used to provide a universal “health warning” on magazine images.... Read more »

Kee E, & Farid H. (2011) A perceptual metric for photo retouching. Proceedings of the National Academy of Sciences of the United States of America. PMID: 22123980  

  • November 30, 2011
  • 08:25 AM
  • 964 views

Video Tip of the Week: Phosida, a post-translational modification database

by Trey in OpenHelix

Over 2 years ago I did a tip of the week on Phosida. Phosida is a database of phosphorylation, acetylation, and N-glycosylation data. Since the last tip, Phosida has undergone significant growth and some changes, including the addition of much more data (80,000 phosphorylation, acetylation and N-glycosylated sites from 9 different species) and tools (prediction and [...]... Read more »

Gnad, F., Gunawardena, J., & Mann, M. (2010) PHOSIDA 2011: the posttranslational modification database. Nucleic Acids Research, 39(Database). DOI: 10.1093/nar/gkq1159  

  • November 30, 2011
  • 01:22 AM
  • 1,051 views

Virtual Reality for Stress Management

by Dr Shock in Dr Shock MD PhD

Going to a relaxing zone in a natural park, such as a river, waterfall, lake, or garden, in virtual reality and doing relaxation exercises supported by a relaxing narrative effectively reduces stress and anxiety. Virtual reality showed better improvements than video or audio, although the latter two also reduced stress and anxiety. We found [...]


... Read more »

  • November 29, 2011
  • 01:50 AM
  • 1,096 views

Online Disclosure greater than Offline Disclosure?

by Dr Shock in Dr Shock MD PhD

Most are afraid that online disclosure is greater than offline disclosure, with the computer luring us into revealing more information about ourselves than would probably be safe. Self-disclosure is the voluntary and verbal communication of personal information to a targeted recipient. It has three dimensions: frequency, breadth, and depth. Frequency of self-disclosure refers to the amount of [...]


... Read more »

Nguyen, M., Bin, Y., & Campbell, A. (2011) Comparing Online and Offline Self-Disclosure: A Systematic Review. Cyberpsychology, Behavior, and Social Networking. DOI: 10.1089/cyber.2011.0277  

  • November 28, 2011
  • 08:56 PM
  • 911 views

The future of computation in drug discovery

by The Curious Wavefunction in The Curious Wavefunction

Computational chemistry as an independent discipline has its roots in theoretical chemistry, itself an outgrowth of the revolutions in quantum mechanics in the 1920s and 30s. Theoretical and quantum chemistry advanced rapidly in the postwar era and led to many protocols for calculating molecular and electronic properties which became amenable to algorithmic implementation once computers came on the scene. Rapid growth in software and hardware in the 80s and 90s led to the transformation of theoretical chemistry into computational chemistry and to the availability of standardized, relatively easy-to-use computer programs like GAUSSIAN. By the end of the first decade of the new century, the field had advanced to a stage where key properties of simple molecular systems such as energies, dipole moments and stable geometries could be calculated in many cases from first principles with an accuracy matching experiment. Developments in computational chemistry were recognized by the Nobel Prize for chemistry awarded in 1998 to John Pople and Walter Kohn.

In parallel with these theoretical advances, another thread started developing in the 80s which attempted something much more ambitious: to apply the principles of theoretical and computational chemistry to complex systems like proteins and other biological macromolecules and to study their interactions with drugs. The practitioners of this paradigm wisely realized that it would be futile to calculate properties of such complex systems from first principles, thus leading to the initiation of parametrized approaches in which properties would be "pre-fit" to experiment rather than calculated ab initio. Typically there would be an extensive set of experimental data (the training set) which would be used to parametrize algorithms which would then be applied to unknown systems (the test set). The adoption of this approach led to molecular mechanics and molecular dynamics (both grounded in classical physics) and to quantitative structure-activity relationships (QSAR), which sought to correlate molecular descriptors of various kinds to biological activity. The first productive approach to docking a small molecule in a protein active site in its lowest energy configuration was refined by Irwin "Tack" Kuntz at UCSF. And beginning in the 70s, Corwin Hansch at Pomona College had already made remarkable forays into QSAR.

These methods gradually started to be applied to actual drug discovery in the pharmaceutical industry. Yet it was easy to see that the field was getting far ahead of itself, and in fact even today it suffers from the same challenges that plagued it thirty years back. Firstly, nobody had solved the twin cardinal problems of modeling protein-ligand interactions. The first one was conformational sampling, wherein you had to exhaustively search the conformation space of a ligand or protein. The second one was energetic ranking, wherein you had to rank these structures, either in their isolated form or in the context of their interactions with a protein. Both of these problems remain the central problems of computation as applied to drug discovery. In the context of QSAR, spurious correlations based on complex combinations of descriptors can easily befuddle its practitioners and create an illusion of causation.

Furthermore, there have been various long-standing problems such as the transferability of parameters from a known training set to an unknown test set, the calculation of solvation energies for even the simplest molecules, and the estimation of entropies. And finally, it's all too easy to forget the sheer complexity of the protein systems we are trying to address, which display a stunning variety of behaviors, from large conformational changes to allosteric binding to complicated changes in ionization states and interactions with water. The bottom line is that in many cases we just don't understand the system which we are trying to model well enough.

Not surprisingly, a young field still plagued with multiple problems could be relied upon as no more than a guide when it came to solving practical problems in drug design. Yet the discipline saw unfortunate failures in PR as it was periodically hyped. Even in the 80s there were murmurs about designing drugs using computers alone. Part of the hype unfortunately came from the practitioners themselves, who were less than cautious about announcing the strengths and limitations of their approaches. The consequence was that although there continued to be significant advances in both computing power and algorithms, many in the drug discovery community looked at the discipline with a jaundiced eye.

Yet the significance of the problems that the field is trying to address means that it will continue to be promising. What's its future, and what would be the most productive direction in which it could be steered? An interesting set of thoughts is offered in a set of articles published in the Journal of Computer-Aided Molecular Design. The articles are written by experienced practitioners in the field and offer a variety of opinions, critiques and analyses which should be read by all those interested in the future of modeling in the life sciences.

Jurgen Bajorath from the University of Bonn, along with his fellow modelers from Novartis, laments the fact that studies in the field have not aspired to a high standard of validation, presentation and reproducibility. This is an important point. No scientific field can advance if there is wide variation in the presentation of the quality of its results. When it comes to modeling in drug discovery, the proper use of statistics and well-defined metrics has been highly subjective, leading to great difficulty in separating the wheat from the chaff and honestly assessing the impact of specific techniques. Rigorous statistical validation in particular has been virtually non-existent, with the highly suspect correlation coefficients being the most refined weapon of choice for many scientists in the field. An important step in emphasizing the virtue of objective statistical methods in modeling was taken by Anthony Nicholls of OpenEye Software, who in a series of important articles laid out the statistical standards and sensible metrics that any well-validated molecular modeling study should aspire to. I suspect that these articles will go down in the annals of the field as key documents.

In addition, as MIT physics professor Walter Lewin is fond of constantly emphasizing in his popular lectures, any measurement you make without knowledge of its uncertainty is meaningless. It is remarkable that in a field as fraught with complexity as modeling, there has been a rather insouciant indifference to the estimation of error and uncertainty. Modelers egregiously quote numbers involving protein-ligand energies, dipole moments and other properties to four or six figures of significance when ideally those numbers are suspect even to one decimal point. Part of the problem has simply been an insufficient grounding in statistics. Tying every number to its estimated error margin (if it can be estimated at all) will not only give experimentalists and other modelers an accurate feel for the validity of the analysis and the ensuing improvement of methods, but will also keep semi-naive interpreters from being overly impressed by the numbers. Whether it's finance or pharmaceutical modeling, it's always a bad idea to get swayed by figures.

Then there's the whole issue, as the modelers from Novartis emphasize, of spreading the love. The past few years have seen the emergence of several rigorously constructed datasets carefully designed to test and benchmark different modeling algorithms. The problem is that these datasets have been most often validated in an industry that's famous for its secrecy. Until the pharmaceutical industry makes at least some efforts to divulge the results of its studies, a true assessment of the value of modeling methods will always come in fits and starts. I have been recently reading Michael Nielsen's eye-opening book on open science, and it's startling to realize the gains in advancement of knowledge that can result from sharing of problems, solutions and ideas. If modeling is to advance and practically contribute to drug discovery, it's imperative for industry - historically the most valuable generator of any kind of data in drug discovery - to open its vaults and allow scientists to use its wisdom t... Read more »
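
As an aside on the statistics point above, here is a small illustrative sketch (mine, not from the article) of the practice being argued for: reporting a correlation coefficient together with a bootstrap confidence interval rather than as a bare number. The function name and the (predicted, measured) data are made up purely for illustration.

```python
# Illustrative sketch: attach a bootstrap confidence interval to a Pearson r
# instead of quoting the correlation as a bare, overly precise number.
import numpy as np

def pearson_with_bootstrap_ci(x, y, n_boot=10000, alpha=0.05, seed=0):
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    rng = np.random.default_rng(seed)
    # Resample (x, y) pairs with replacement and recompute r each time.
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    boot = np.array([np.corrcoef(x[i], y[i])[0, 1] for i in idx])
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return r, (lo, hi)

# Hypothetical predicted vs. measured affinities: with only a handful of
# points the interval is wide, which is exactly what a bare r hides.
predicted = [6.1, 7.3, 5.8, 8.0, 6.9, 7.7]
measured  = [5.9, 7.8, 6.2, 7.5, 6.4, 8.1]
r, (lo, hi) = pearson_with_bootstrap_ci(predicted, measured)
print(f"r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```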

  • November 28, 2011
  • 01:50 PM
  • 591 views

TED Talk: Oxytocin—the moral molecule

by Cath in Basal Science (BS) Clarified

In this TED talk Dr. Paul Zak, a professor at Claremont Graduate University in Southern California, describes how oxytocin is responsible for empathy and why it is the “moral molecule” in humans. I was intrigued by the title of this talk because I had always thought morality was something you learn and are not born [...]... Read more »

Baumgartner, T., Heinrichs, M., Vonlanthen, A., Fischbacher, U., & Fehr, E. (2008) Oxytocin Shapes the Neural Circuitry of Trust and Trust Adaptation in Humans. Neuron, 58(4), 639-650. DOI: 10.1016/j.neuron.2008.04.009  

Domes, G., Heinrichs, M., Michel, A., Berger, C., & Herpertz, S. (2007) Oxytocin Improves “Mind-Reading” in Humans. Biological Psychiatry, 61(6), 731-733. DOI: 10.1016/j.biopsych.2006.07.015  

Kosfeld, M., Heinrichs, M., Zak, P., Fischbacher, U., & Fehr, E. (2005) Oxytocin increases trust in humans. Nature, 435(7042), 673-676. DOI: 10.1038/nature03701  

Jorge Moll, Roland Zahn, Ricardo de Oliveira-Souza, Frank Krueger, & Jordan Grafman. (2005) The neural basis of human moral cognition. Nature Reviews Neuroscience, 799. info:/

Zak, P., Kurzban, R., & Matzner, W. (2004) The Neurobiology of Trust. Annals of the New York Academy of Sciences, 1032(1), 224-227. DOI: 10.1196/annals.1314.025  

Zak, P., Kurzban, R., & Matzner, W. (2005) Oxytocin is associated with human trustworthiness. Hormones and Behavior, 48(5), 522-527. DOI: 10.1016/j.yhbeh.2005.07.009  

  • November 28, 2011
  • 08:00 AM
  • 711 views

Oracles Past and Present: Our Means of Managing Information

by Krystal D'Costa in Anthropology in Practice

Our ability to find and share information today is potentially limitless. But how did we get here? From cave paintings to the iPad—how does human innovation bring us here? Go Ask the Oracle We live in an amazing time: We never have to wait to know. At this very moment you could be on a [...]

... Read more »

Hargittai, E. (2002) Second-level digital divide: Differences in people’s online skills. First Monday, 7(4). info:/

  • November 22, 2011
  • 11:30 AM
  • 764 views

22 website quality markers

by David Bradley in Sciencetext

Many factors affect the web experience and perception of the quality of a website. Writing in the rather appropriately named International Journal of Information Quality, Jaikrit Kandari of the University of Nebraska, Lincoln and colleagues there and at The University of Texas at Arlington, have outlined 21 factors that could be used as a framework [...]
... Read more »

Jaikrit Kandari, Erick C. Jones, Fiona Fui-Hoon Nah, & Ram R. Bishu. (2011) Information quality on the World Wide Web: development of a framework. Int. J. Information Quality, 2(4), 324-343. info:/

  • November 21, 2011
  • 09:35 PM
  • 533 views

Role of Porous Medium Modelling in Biothermofluids

by Arunn in nOnoScience (a.k.a. Unruled Notebook)

Biothermology, or bio-fluid flow and heat transfer, is an important and developing subdivision of bioengineering. Seeking simplifications for biological processes that are inherently complex is an exciting and useful multidisciplinary pursuit. Recently, I was invited to write a review article on the role of porous medium modelling in biothermofluids for the IISc Journal, a … Continue reading »... Read more »

Arunn Narasimhan. (2011) The Role of Porous Medium Modeling in Biothermofluids. Journal of the Indian Institute of Science, 91(3), 243-266. info:other/

  • November 16, 2011
  • 12:58 PM
  • 1,105 views

Renewable energy rises from the ashes

by Charles Harvey in Charles Harvey - Science Communicator

Old oil and gas wells might soon be reborn as environmentally friendly geothermal power generators. Over $36,000 of electricity could be generated from each retrofitted well. ... Read more »

  • November 16, 2011
  • 08:09 AM
  • 753 views

Video Tip of the Week: MapMi, automated mapping of microRNA loci

by Trey in OpenHelix

Today’s video tip of the week is on MapMi. This tool is found at EBI and was developed by the Enright lab. It is a computational system for mapping miRNAs within and across species. As the abstract of their recent paper says: Currently miRBase is their primary repository, providing annotations [...]... Read more »

Guerra-Assuncao, J., & Enright, A. (2010) MapMi: automated mapping of microRNA loci. BMC Bioinformatics, 11(1), 133. DOI: 10.1186/1471-2105-11-133  

  • November 16, 2011
  • 05:28 AM
  • 857 views

New mobile battery is chargeable within 15 minutes and lasts a week

by United Academics in United Academics

Fed up with re-charging your smartphone every single day? Good news: batteries for phones and laptops will soon be able to recharge ten times faster than they do today. Furthermore, these batteries hold a charge ten times larger than current technology allows.... Read more »

Xin Zhao, Cary M. Hayner, Mayfair C. Kung, & Harold H. Kung. (2011) In-Plane Vacancy-Enabled High-Power Si–Graphene Composite Electrode for Lithium-Ion Batteries. Advanced Energy Materials, 1(6), 1079-1084. info:/10.1002/aenm.201100426

  • November 15, 2011
  • 03:28 PM
  • 647 views

How to (hopefully) not drown in data

by Emma in we are all in the gutter

More is better, right? Bigger telescopes and bigger surveys are both undoubtedly good things, but to make the best use of these advances we need to be able to handle the corresponding increase in data flow, and subsequent pressure on the astronomical archives which are going to have to cope with it. This is a [...]... Read more »

G. Bruce Berriman, & Steven L. Groom. (2011) How Will Astronomy Archives Survive The Data Tsunami? ACM Queue. arXiv: 1111.0075v1
