Humanity is sitting on an immense well of information, and it’s only growing exponentially. To make sense of all the noise, whether we’re talking about speech recognition, identifying cosmic bodies or ranking search results, we need highly complex algorithms that hit the bull’s eye, or come as close as possible, while using less processing power. In the future, such algorithms will likely be built on machine learning that gets smarter with each pass over the data, quite possibly running on quantum computers as well. Until then, we have to make do with conventional algorithms, and a most exciting paper detailing one such technique was recently published.
Smashing data – the bits and pieces that remain after each ‘collision’ are the most important
Called ‘data smashing’, the algorithm tries to fix a major flaw in how information is processed today. Immense amounts of data are constantly being collected, and while algorithms help us declutter them, at the end of the day companies and governments still need experts to oversee the process and lend a much-needed human touch. Basically, computers are still pretty bad at recognizing complex patterns. Sure, they’re great at crunching numbers, but in the end humans have to compare the outputted scenarios and pick out the most relevant answer. As more and more processes are monitored and fed into large data sets, however, this task is becoming ever more difficult, and human experts are in short supply.
The algorithm, developed by Hod Lipson, associate professor of mechanical engineering and of computing and information science, and Ishanu Chattopadhyay, a former postdoctoral associate of Lipson’s now at the University of Chicago, is nothing short of brilliant. It estimates the similarity between streams of arbitrary data without human intervention, and even without access to the data sources.
Basically, streams of data are ‘smashed’ against one another to tease out their unique information, by measuring what remains after each ‘collision’. The more information survives, the less likely it is that the two streams originated from the same source.
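To make the idea a little more concrete, here is a minimal sketch in Python. This is not the authors’ published method (which collides a quantized stream with an ‘anti-stream’ and checks how far the residue is from flat noise); it is only a toy stand-in that captures the end goal of scoring whether two symbol streams plausibly came from the same hidden source, using simple n-gram statistics. All names, parameters and the example data are illustrative.

```python
import numpy as np
from collections import Counter

def ngram_distribution(stream, n=2):
    """Empirical distribution of length-n symbol blocks in a quantized stream."""
    counts = Counter(tuple(stream[i:i + n]) for i in range(len(stream) - n + 1))
    total = sum(counts.values())
    return {block: c / total for block, c in counts.items()}

def stream_distance(a, b, n=2):
    """Total-variation distance between the n-gram statistics of two streams.
    Near 0: likely the same hidden source; near 1: very different sources."""
    pa, pb = ngram_distribution(a, n), ngram_distribution(b, n)
    blocks = set(pa) | set(pb)
    return 0.5 * sum(abs(pa.get(k, 0.0) - pb.get(k, 0.0)) for k in blocks)

# Hypothetical quantized sensor streams: two from the same biased process,
# one from a different (fair) process.
rng = np.random.default_rng(0)
s1 = rng.choice([0, 1], size=5000, p=[0.7, 0.3])
s2 = rng.choice([0, 1], size=5000, p=[0.7, 0.3])
s3 = rng.choice([0, 1], size=5000, p=[0.5, 0.5])

print(stream_distance(s1, s2))  # small: streams look like the same source
print(stream_distance(s1, s3))  # larger: streams look like different sources
```

The point of the real algorithm, as the researchers describe it, is that this kind of similarity score can be computed directly from raw data streams, with no expert-designed features and no model of where the data came from.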
Data smashing could open the door to a new body of research: beyond helping experts sort through data faster, it may also identify anomalies that humans could never spot, by virtue of sheer computational brute force. The researchers demonstrated the technique on real-world problems, including detecting anomalous cardiac activity in heart recordings and classifying astronomical objects from raw photometry. The results were on par with the accuracy of specialized algorithms and heuristics tweaked by experts for those tasks.