
Have YOU actually ever met anyone named McLovin? 10 of the Coolest Big Data Projects


THE 5 COOLEST EU BIG DATA PROJECTS
http://www.cbronline.com/news/tech/software/businessintelligence/the-5-coolest-eu-big-data-projects-4340683

SUPERBAD

From virtual reality to treating brain trauma.

The EU is more than a tangle of bureaucracy and wildly varying accents, you know. In fact, it funds many of our most exciting technology projects.

While the likes of Google, Apple, and Cisco might be the names most associated with the technologies of tomorrow, the European Commission's pockets are more than deep enough to throw cash at a variety of future tech schemes.

And it does just that, helping researchers explore ways to improve millions of lives around Europe.

With the sheer amount of data being generated today, big data is inevitably a part of that, and we've found five projects the EU is bankrolling to help change our lives in both big and small ways.

Virtual reality

EU researchers have recognised that so many billions of bytes of data are being generated every day that we have trouble even knowing what to do with it all. That's where CEEDs comes in.

The acronym stands for the Collective Experience of Empathic Data Systems. Admittedly, that's not very catchy, but the project aims not only to present data to you in an attractive way, but also to constantly change that presentation to prevent brain overload.

CEEDs' hope is to present big data in an interactive environment that will help us use it to generate new ideas more quickly. To do this, they've built a virtual reality "experience induction machine" in Barcelona that lets users 'step inside' large datasets.

This machine records your heart rate, eye movements and gestures to interpret your reactions to the data, changing the display of the datasets accordingly.

Jonathan Freeman, Professor of Psychology at Goldsmiths, University of London and coordinator of CEEDs, explains: "The system acknowledges when participants are getting fatigued or overloaded with information. And it adapts accordingly. It either simplifies the visualisations so as to reduce the cognitive load, thus keeping the user less stressed and more able to focus. Or it will guide the person to areas of the data representation that are not as heavy in information."
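To make that adaptation loop concrete, here is a minimal Python sketch of the kind of logic Freeman describes: estimate cognitive load from a few physiological signals and dial the level of detail up or down. The signal names, weights and thresholds are purely illustrative assumptions, not CEEDs' actual model.

```python
# Minimal sketch of the adaptation loop described above: estimate cognitive load
# from a few physiological signals and raise or lower the visualisation detail.
# Signal names, weights and thresholds are illustrative, not CEEDs' actual model.
def cognitive_load(heart_rate, pupil_dilation, fixation_rate):
    # crude weighted score in [0, 1]; a real system would use a trained model
    return min(1.0, 0.4 * heart_rate / 120 + 0.3 * pupil_dilation + 0.3 * fixation_rate)

def adapt_display(load, current_detail):
    if load > 0.75:          # user overloaded: simplify the view
        return max(1, current_detail - 1)
    if load < 0.35:          # user has headroom: reveal more of the dataset
        return current_detail + 1
    return current_detail    # otherwise leave the presentation alone

print(adapt_display(cognitive_load(heart_rate=110, pupil_dilation=0.8, fixation_rate=0.9), current_detail=5))
```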

CEEDs says applications for the virtual reality display range from museum exhibitions to inspection of satellite imagery to astronomy, economics and historical research.

Treating people

Using big data in the health sector can actually help save lives: better data analysis can lead to better decision making by identifying trends and correlations previously missed.

One €3m EU project has collected data from hundreds of patients who have suffered brain trauma to build software that improves diagnoses and predicts treatment outcomes.

The TBI Care project is working on a tool that lets doctors enter data from emergency department tests to predict the best treatment for a patient's brain injury.

Dr Mark van Gils, TBI Care's scientific coordinator, said: "Patients are tested for many different things when they arrive at an emergency department. The care team would look at their awareness and reactivity, and at how much oxygen is in their blood, for example.

"They also explore the potential of more sophisticated measurements - for example testing for proteins that indicate different types of damage to the patient's brain tissue in their circulation, and using imaging to look for internal bleeding. We want to see which tests give the best indicators of the patient's likely outcome.

No more traffic jams?

Big data is already being used to manage traffic flows, with data gathered from sensors, GPS and social media to plan routes and co-ordinate different modes of transport.

A €3.6m project called Viajeo brought together 25 companies that, between 2009 and 2012, aimed to develop an open platform for transport services. That involved integrating more prosaic traffic management data, such as traffic volumes at fixed locations and GPS-derived information about cars as they travelled.
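A toy sketch of that kind of integration, assuming a couple of hypothetical CSV feeds (fixed loop-detector counts and GPS probe speeds), might look like the snippet below; the column names and congestion rule are illustrative, not Viajeo's schema.

```python
# Rough sketch of the kind of integration described: fuse fixed-location counts
# with GPS-derived speeds into one per-road-segment view. Column names and the
# congestion rule are assumptions, not the Viajeo platform's schema.
import pandas as pd

loop_counts = pd.read_csv("loop_detectors.csv")    # segment_id, timestamp, vehicle_count
gps_probes = pd.read_csv("gps_probes.csv")         # segment_id, timestamp, avg_speed_kmh

traffic = loop_counts.merge(gps_probes, on=["segment_id", "timestamp"], how="outer")
traffic["congested"] = (traffic["avg_speed_kmh"] < 15) & (traffic["vehicle_count"] > 100)
print(traffic.groupby("segment_id")["congested"].mean())  # share of congested intervals
```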

As a result of Viajeo's work, transport providers and managers have been able to implement new traffic management services specific to the cities of Athens, Beijing, Sao Paulo and Shanghai.

Cutting energy consumption

Reducing the amount of energy we use is a key priority for most countries as the world struggles with climate change. With more data available, researchers can better inform policymakers' decisions on how to tackle environmental issues.

That's how we've ended up with a project to design software that finds the best places to build wind farms - often derided as noisy eyesores. The main objective of the Sopcawind project is to define what information must be factored in to come up with the answer.
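As a hedged illustration of what "factoring in" that information could mean in practice, here is a tiny weighted-scoring sketch for ranking candidate wind farm sites; the criteria, weights and site data are invented for the example and are not Sopcawind's actual methodology.

```python
# Toy multi-criteria site ranking: each candidate gets a weighted score.
# Criteria, weights and data are illustrative assumptions, not Sopcawind's model;
# a real tool would normalise the inputs before weighting them.
WEIGHTS = {"wind_speed": 0.5, "grid_distance_km": -0.2, "noise_impact": -0.2, "bird_risk": -0.1}

def site_score(site):
    return sum(WEIGHTS[k] * site[k] for k in WEIGHTS)

candidates = [
    {"name": "Ridge A", "wind_speed": 9.1, "grid_distance_km": 4, "noise_impact": 2, "bird_risk": 1},
    {"name": "Plain B", "wind_speed": 6.5, "grid_distance_km": 1, "noise_impact": 5, "bird_risk": 3},
]
for site in sorted(candidates, key=site_score, reverse=True):
    print(site["name"], round(site_score(site), 2))
```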

Creating the big data economy

Possibly the most ambitious EU-funded project, the Big Data Public Private Forum has the lofty aim of building an industry around big data. Quite what that involves, it appears, even the forum isn't entirely certain.

However, the project already has some impressive partners, including Unify (formerly Siemens), Atos, Germany's Open Knowledge Foundation and a string of universities.

5 Big Data projects that could change your life

Real-world Big Data projects that are already paying rewards.

http://www.networkworld.com/article/2459280/big-data-business-intelligence/5-big-data-projects-that-could-change-your-life.html

Most over-hyped technology trends wear out their welcome pretty quickly, which should make skeptics among us wary about Big Data. However, while Big Data is being touted as the latest trend that will change the world, the skeptics aren’t as, well, skeptical as they were about cloud and social.

That’s probably because Big Data is generating real-world wins for the companies embracing it. Already, Big Data analytics is starting to fundamentally change such disparate disciplines as pharmaceutical research, sales and marketing, and product development.

Many use cases, such as smart cities and driverless cars, even get us excited about a Jetsons-like existence where the world around us seems to anticipate our needs. Those scenarios may be the future of Big Data, but they’re not the “now” of Big Data.

“There’s a big difference between what is technologically feasible and what is practical,” says Don DeLoach, CEO of Infobright, a data analytics company. “Look at two trends driving Big Data: Internet of Things and Machine to Machine communications. Both have been around for a long time, but the increasing sophistication of sensors and a corresponding drop in price, along with the proliferation of various wireless communications options, means that what was once technologically possible in theory is now becoming practical.”


Some of our most ambitious Big Data dreams just haven’t entered into the realm of the practical yet. The technology is there for a self-driving car, but the infrastructure is not. Yet, the now is still pretty compelling.

“If you want to keep track of what’s really happening with Big Data, just follow the money,” DeLoach said. “Where ROI is the most obvious, that’s where people will invest.” And invest they have.

The ROI for Big Data in health care, vehicle telematics, and online marketing is already clear. That doesn’t mean we won’t eventually see driverless cars and super-smart cities. It just means they’re not yet practical enough to attract large-scale investments.

Here are five Big Data projects straddling the line between the practical and the pie-in-the-sky possible. These projects, or ones like them, could very well change your life:

The Human Genome Project revolutionizes medicine

When the Human Genome Project got off the ground in 1990, we didn’t think of it as a Big Data project, but that’s what it was. By the time a complete human genome was mapped in 2003, some of the precursors to the Big Data movement had already started to percolate in the tech world.

So, it’s no surprise that the health care and pharmaceutical sectors are two of the most aggressive early adopters of Big Data tools, since there’s already a successful track record in place.

The Human Genome Project has also illustrated a sort of Moore’s Law of Big Data. You can already get an incomplete, but useful, snapshot of your genome from sites like 23andMe for $100 or less, and the push to drive down the cost of mapping your entire personal genome for that same price is already well underway. Prices have fallen each and every year. You can map your complete genome now for somewhere between $1,000 and $5,000. Back in 2007, this would have cost you $1 million, minimum.
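Taking the article's own figures at face value (roughly $1,000,000 in 2007 down to about $1,000 by the time of writing in 2014), a quick back-of-envelope calculation shows how steep that "Moore's Law of Big Data" really is:

```python
# Back-of-envelope check on that "Moore's Law of Big Data", taking the article's
# figures at face value: roughly $1,000,000 in 2007 down to about $1,000 by 2014.
cost_2007, cost_now, years = 1_000_000, 1_000, 7
annual_factor = (cost_2007 / cost_now) ** (1 / years)
print(f"cost falls by about {1 - 1 / annual_factor:.0%} per year")  # ~63% per year
```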

Emory University Hospital and IBM develop ICU of the future

Emory University Hospital is using software from IBM and Excel Medical Electronics (EME) for a research project that has a goal of creating advanced, predictive medical care for critical patients through real-time streaming analytics.

Emory is testing a new system that can identify patterns in physiological data in order to instantly alert clinicians to danger signs in patients. In a typical ICU, a dozen different streams of medical data light up the monitors at a patient’s bedside – including heart physiology, respiration, brain waves, and blood pressure. This constant feed of vital signs is transmitted as waves and numbers and routinely displayed on computer screens at every bedside. Currently, it’s up to doctors and nurses to rapidly process and analyze all this information in order to make medical decisions.

Today, any small deviation from the norm, which could be an early warning sign, usually goes unnoticed.

The system being piloted at Emory uses EME's BedMasterEX, IBM InfoSphere Streams, and Emory's analytics engine to collect and analyze physiological patient data in real time. The new system will enable clinicians to acquire, analyze, and correlate medical data at a volume and speed they could only dream of a few years back.
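The published pieces don't spell out the analytics, but the core idea of spotting small deviations from a patient's norm in a live vital-sign stream can be sketched with something as simple as a sliding-window baseline. The class below is a toy Python stand-in, not the Emory/IBM InfoSphere Streams implementation; heart_rate_stream and notify_clinician are hypothetical hooks.

```python
# Toy sliding-window detector: flag readings that deviate sharply from a patient's
# own recent baseline. Illustrative only; not the Emory/IBM implementation.
from collections import deque
import statistics

class VitalSignMonitor:
    def __init__(self, window=120, threshold=3.0):
        self.readings = deque(maxlen=window)   # keep the last `window` samples
        self.threshold = threshold             # alert when |z-score| >= threshold

    def update(self, value):
        alert = False
        if len(self.readings) >= 30:           # wait for a usable baseline
            mean = statistics.fmean(self.readings)
            sd = statistics.pstdev(self.readings) or 1e-9
            alert = abs(value - mean) / sd >= self.threshold
        self.readings.append(value)
        return alert

monitor = VitalSignMonitor()
for hr in heart_rate_stream:                   # hypothetical feed of heart-rate samples
    if monitor.update(hr):
        notify_clinician(hr)                   # hypothetical alerting hook
```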

“Accessing and drawing insights from real-time data can mean life and death for a patient,” says Tim Buchman, MD, PhD, and director of critical care at Emory University Hospital. “Through this new system we will be able to analyze thousands of streaming data points and act on those insights to make better decisions about which patient needs our immediate attention and how to treat that patient. It’s making us much smarter in our approach to critical care.”

Penn State’s Salis Lab helps researchers engineer synthetic organisms

Howard M. Salis, an assistant professor in Penn State University’s chemical engineering department, taught himself how to code and built a high-performance computing web portal, the Salis Lab, that enables researchers in the synthetic biology and metabolic engineering fields to use computational methods to design synthetic organisms.

“Microorganisms are the best chemists on Earth,” Salis says. “If we learn to take advantage of them, we can manufacture a whole diversity of products. In the past, genetic engineering was more like tinkering and trial and error.”

In other words, genetic engineering was more like natural selection itself, random and slow, but with a much smaller pool of subjects.

“Synthetic biology, on the other hand, is more of an engineering discipline. We want to quantify everything. We develop bio-physical models that we can use to make quantitative predictions about what will happen when DNA mutates in various ways,” Salis explains.

Synthetic biology involves extremely complex algorithms: the number of possible mutations in even a short DNA sequence is greater than the number of atoms in the universe. The project is therefore hosted on AWS Elastic Compute Cloud, which can scale up or down as needed. The Salis Lab has become wildly popular, with more than 2,000 biotechnology researchers designing over 30,000 synthetic DNA sequences through the web portal in the last two years.
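That "atoms in the universe" comparison checks out with a couple of lines of Python, using the common order-of-magnitude estimate of 10^80 atoms: once a DNA sequence is about 133 bases long, its possible variants already outnumber them.

```python
import math

ATOMS_IN_UNIVERSE = 1e80   # common order-of-magnitude estimate

# A DNA sequence of length n can take 4**n forms (A, C, G or T at each position).
# Shortest length whose sequence space already exceeds the atom count:
n = math.ceil(math.log(ATOMS_IN_UNIVERSE, 4))
print(n, 4**n > ATOMS_IN_UNIVERSE)   # -> 133 True
```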

The applications for this are as varied as the researchers' imaginations. One goal is to figure out a way to engineer microorganisms that provide a fuel source economically competitive with fossil fuels. A more mundane use case is developing pigments for blue jeans.

Even more amazing is the predictive power researchers can tap into. "Using our models, we can actually predict evolution," Salis says. "We can simulate the effect of DNA mutations to predict the most probable course evolution will follow."

Eventually, this could allow researchers to develop microorganisms that are resistant to evolution.

The possible use cases of this are staggering. There are billions of microorganisms in the world, and each has parts of its genome that we could potentially put to work for us. It’s an enormous Big Data challenge to sequence those genomes, quantify and catalog them, and, finally, predict how to combine them in useful ways. But it's a challenge that researchers like Salis are eager to tackle. 

Georgetown’s Global Insight Initiative tackles “Big Problems”

Georgetown University’s Global Insight Initiative pulls data from around the world to gain insights into societal trends. The Global Insight Initiative analyzes data, but first needs to pull it, organize it, then package it to answer very complex questions.

“The world is a really complex system; there are 7 billion people interacting and competing for resources,” says J.C. Smart, director of the Global Insight Initiative at Georgetown University. The world has 40,000 cities, 12 million miles of roads, 800 million automobiles, and on and on. “Understanding how those all interact, and how they’re all dependent on one another is a very complex system. It’s a system of systems. That is Big Data, but more to the point, when you’re looking at the planet, it is Big Knowledge.”

The Global Insight Initiative needed data integration tools to manage the data volume in order to improve their knowledge base. “The knowledge base, just to give you an estimate of the number of things we’re talking about, we’re talking about a trillion objects and a quadrillion relationships,” Smart explains.

Kapow Software worked with Georgetown University’s Global Insight Initiative to automate high-volume data integration in order to expand the Initiative’s knowledge base. This involves accessing 20,000+ web sources from 162 countries representing 42 native languages to look at the planet and derive that “Big Knowledge.” Before automation, this process was so manually intensive that it would take dozens of people to find, pull, and organize documents and other web artifacts. After that, where do you find the time or resources to analyze the collection of information?

The Global Insight Initiative used Kapow’s software to create automated data integration flows (which you can think of as info-gathering robots). Once deployed, these infobots let a single user (who need not have any coding skills) run and manage thousands of automated data integration applications at any time in order to explore an integrated view of what could be wildly disparate data.
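Kapow's flows are built in its own visual tooling rather than written by hand, but the fetch-extract-file pattern an "infobot" follows can be sketched in a few lines of Python. The URL, extracted fields and in-memory store below are illustrative assumptions, not Kapow's API or Georgetown's pipeline.

```python
# Tiny stand-in for an "infobot": fetch a source, extract what we need, and file
# it into a knowledge store. The URL, fields and list-based store are hypothetical.
import requests
from bs4 import BeautifulSoup

def run_infobot(url, store):
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    record = {
        "url": url,
        "title": soup.title.string if soup.title else "",
        "headings": [h.get_text(strip=True) for h in soup.find_all("h2")],
    }
    store.append(record)        # a real system would write to a database
    return record

knowledge_base = []
run_infobot("https://example.org/water-resources-report", knowledge_base)
```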

Now, the Global Insight Initiative will try to find answers to really hard “Big Problems,” such as: How do we best deploy water resources? How do we minimize the spread of diseases? How do we manage power distribution? How do we manage locations of health-care clinics to offer access to as many people as possible, and how do we position physician resources when disasters and catastrophes hit?

LA ExpressPark seeks to ease congestion and reduce pollution

Los Angeles’ downtown district has experienced significant growth over the last decade, transforming itself from a part of the city best known for skid row to an entertainment and business hot spot. Along with the growth, however, came tremendous traffic clutter. As drivers search for open spaces, they often circle the block for 30 minutes or more.

“Parking in downtown Los Angeles had become an expensive game of chance,” says David Cummins, senior vice president and managing director, parking and justice solutions, Xerox.

Making matters worse, on-street parking prices at meters rarely matched demand. Prices were uniform within a given area and were often the same as, or cheaper than, garages a few blocks away. According to research from UCLA professor Donald Shoup, as much as 74 percent of congestion in downtown areas is due to drivers hunting for street parking. In a city where people already drive too much, there was really no incentive for drivers to park farther away.

To better match supply and demand, and hopefully ease congestion along the way, the city brought in Xerox to develop the LA ExpressPark parking system. Xerox installed hockey-puck-sized sensors in parking spaces to detect vacancies. Then, to better align supply with demand, Xerox developed an algorithm-based dynamic pricing engine to raise fares in high-occupancy blocks (to encourage turnover) and lower fares on fairly empty blocks (to encourage people to go out of their way a little bit).
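Xerox hasn't published the pricing engine itself, but the occupancy-driven idea can be sketched with a simple rule; the thresholds, step size and rate bounds below are made up for the example, not ExpressPark's actual algorithm.

```python
# Sketch of demand-responsive pricing: nudge rates up on full blocks and down on
# empty ones. Thresholds, step and bounds are illustrative, not Xerox's algorithm.
def adjust_rate(current_rate, occupancy, step=0.50, floor=0.50, ceiling=6.00):
    """occupancy is the share of sensed spaces in a block that are taken."""
    if occupancy > 0.85:                 # block nearly full: encourage turnover
        return min(ceiling, current_rate + step)
    if occupancy < 0.60:                 # block underused: make it attractive
        return max(floor, current_rate - step)
    return current_rate                  # demand roughly in balance

print(adjust_rate(3.00, 0.92))  # 3.50
print(adjust_rate(3.00, 0.40))  # 2.50
```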

As a transplant to L.A., I’ve often puzzled over the fact that Angelenos would prefer to circle the block endlessly than simply park two blocks away and endure a five-minute walk. What didn’t occur to me is that part of the problem, apparently, was a lack of awareness. If people knew parking was convenient (and cheaper) two blocks away, more would take advantage of it.

What happened when supply and demand were brought in line was that rates were actually lowered for 60 percent of meters, while they went up on only 20 percent of them. (Others stayed the same.)

To direct drivers to those empty spaces, new variable message signs that update automatically as conditions change have been deployed. The information is also shared with smartphone apps such as Parker and Park Me, as well as the L.A. City website. Soon, Xerox intends to push the data directly to vehicles' navigation systems, which would automatically direct drivers to the open spot nearest their destinations and perhaps even pay for the parking automatically.

Better yet, congestion has started to ease and should improve even more as drivers learn about LA ExpressPark. “Parking managers now have complete and immediate visibility into what is happening on streets in their city and can make data-driven decisions on everything from rate structures to meter collections. Merge technology combines multiple vendors – from violation tickets processing to maintenance crews to collection crews – so that everything is readily available for the parking authority. Using the data in this way improves performance and creates additional revenues,” Cummins explains.

Cummins notes that the early results from this program prove that data-driven decisions can help change wasteful driver behaviors in order to reduce congestion and pollution.

Jeff Vance is a Santa Monica-based writer. He's the founder of Startup50, a site devoted to emerging tech startups. Follow him on Twitter @JWVance, or reach him by email at [email protected]

Stashed in: Big Data!, Big Data


These projects sound great but what's the McLovin connection?

Just the titles of the articles: "5 Big Data projects that could change your life" and "THE 5 COOLEST EU BIG DATA PROJECTS" - so the image worked. But then it didn't make sense with the plain title, so I looked up the script. Then I just rolled with it.

Fogell: Yo guys! Sup?

Seth: Fogell, where have you been, man? You almost gave me a goddamn heart attack. Let me see it. Did you pussy out or what?

Fogell: No noooo, man. I got it; it is flawless. Check it!

Evan: [examining the fake ID] Hawaii. All right, that's good. That's hard to trace, I guess. Wait... you changed your name to... McLovin?

Fogell: Yeah.

Evan: McLovin? What kind of a stupid name is that, Fogell? What, are you trying to be an Irish R&B singer?

Fogell: Naw, they let you pick any name you want when you get down there.

Seth: And you landed on McLovin...

Fogell: Yeah. It was between that or Muhammed.

Seth: Why the FUCK would it be between THAT or Muhammed? Why don't you just pick a common name like a normal person?

Fogell: Muhammed is the most commonly used name on Earth. Read a fucking book for once.

Evan: Fogell, have you actually ever met anyone named Muhammed?

Fogell: Have YOU actually ever met anyone named McLovin?

Seth: No, that's why you picked a dumb fucking name!

Fogell: Fuck you.

Seth: Gimme that. All right, you look like a future pedophile in this picture, number 1. Number 2: it doesn't even have a first name, it just says "McLovin"!

Evan: What? One name? ONE NAME? Who are you? Seal?

Seth: Fogell, this ID says that you're 25 years old. Why wouldn't you just put 21, man?

Fogell: Seth, Seth, Seth. Listen up, ass-face: every day, hundreds of kids go into the liquor store with fake IDs, and every single one says they're 21. Pssh, how many 21 year olds do you think there are in this town? It's called fucking strategy, all right?

Evan: Stay calm, okay? Let's not lose our heads. It's... it's a fine ID; it'll... it's gonna work. It's passable, okay? This isn't terrible. I mean, it's up to you, Fogell. This guy is either gonna think 'Here's another kid with a fake ID' or 'Here's McLovin, a 25 year-old Hawaiian organ donor'. Okay? So what's it gonna be?

Fogell: [grinning] ... I am McLovin!

Seth: No you're not. No one's McLovin. McLovin's never existed because that's a made up dumb FUCKING FAIRY TALE NAME, YOU FUCK!

Here's the rest: http://www.imdb.com/title/tt0829482/quotes 

That is so awesome. :)
