A New Study Has Good News For Those Who Like Dark Humor

If you find yourself constantly cracking wise during life’s darker moments, you may have a high IQ.

A new study in the journal Cognitive Processing found an interesting relationship between intelligence and dark humor.

Researchers from the Medical University of Vienna first administered a generalized IQ test to 156 people – both men and women, with an average age of 33 and a range of educational backgrounds. The participants were then asked to react to 12 illustrations by German cartoonist Uli Stein. The subject of each cartoon was bleak or generally considered offensive, like one example that features a GP telling a pregnant woman: “To begin with, here is the good news: your child will always find a parking space.”

If you chuckled at that, chances are you enjoy dark humor, a trait often linked to people with gloomy personas but seldom associated with those who have a high IQ.

This new study, however, found that those who both understood and enjoyed the grim jokes not only had the highest IQ test results, but they were also better educated and scored lower for general aggression and negative moods.

The researchers found that those who despised the dark jokes had average IQ scores, along with the highest levels of aggression and most powerful negative moods. Those who moderately understood and enjoyed the dark humor also had average IQ scores, but they also proved only moderately aggressive and had a relatively positive outlook on life.

In a nutshell, while the grim nature of dark humor may turn some people off, being able to see through the darkness and appreciate the cleverness of the joke – buoyed by a generally positive attitude about life – demands a higher level of cognitive ability.

So don’t feel too guilty about giggling during a particularly grisly round of Cards Against Humanity – but maybe refrain from bragging about it.

h/t IFLScience

The robotic arm that can pick your delicate fruit and veg

I’ve written a few times about the use of robotics in retail in the past few years, whether it’s the automated stock control system developed by Simbe Robotics, or even the fully automated store that recently opened in Sweden.

Of course, automation has been relatively easy to install in big-box environments such as Amazon warehouses, but it’s much harder when dealing with more delicate items, such as fruit and vegetables, which require more sensitive handling.

Ocado, the world’s largest online-only supermarket, has been experimenting with robotic picking and packing of shopping at its highly automated warehouse. The project is part of the SoMa Horizon 2020 framework programme.

Collaborative research

One of the main challenges they have looked to overcome is the ability to manipulate delicate and unusually shaped items, such as those typically found in grocery stores. To overcome this challenge, the team use a compliant gripper alongside an industrial robot arm.

The gripper is designed to handle the full 48,000 or so items currently stocked by Ocado, and it will do so via compliant robotic hands that are specifically designed to handle fragile objects, even with minimal knowledge as to the shape of the item.

An example of a compliant gripper is the RBO Hand 2 developed by the Technische Universität Berlin (TUB). The gripper uses flexible rubber materials and pressurized air for passively adapting grasps which allows for safe and damage-free picking of objects. With seven individually controllable air chambers, the anthropomorphic design enables versatile grasping strategies.

Softly softly

Due to its compliant design, the robotic hand is highly under-actuated: only the air pressure is controlled, while the fingers, palm, and thumb adjust their shape to the given object geometry (morphological computation). This simplifies control and enables effective exploitation of the environment.
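As a rough illustration of that idea, here is a toy Python sketch of an under-actuated grasp: one pressure command per chamber, with each finger’s final curl determined by contact with the object rather than by explicit joint control. The seven-chamber count matches the RBO Hand 2, but the gains and the “object profile” below are invented purely for illustration, not taken from the project.

```python
# Toy model of an under-actuated pneumatic grasp. Only one quantity per
# chamber is commanded (pressure); the final finger shape emerges from
# contact with the object - the "morphological computation" described above.
# All numbers and the compliance model are invented for illustration.

N_CHAMBERS = 7  # the RBO Hand 2 has seven individually controllable chambers

def finger_curl(pressure_kpa, contact_angle_deg):
    """Curl of one finger: pressure drives it closed, but contact with the
    object passively stops it - no per-joint control loop."""
    free_curl_deg = 2.0 * pressure_kpa  # toy pressure-to-angle gain
    return min(free_curl_deg, contact_angle_deg)

def grasp(pressures_kpa, object_profile_deg):
    """Apply one pressure per chamber; each finger conforms to the object."""
    assert len(pressures_kpa) == len(object_profile_deg) == N_CHAMBERS
    return [finger_curl(p, a) for p, a in zip(pressures_kpa, object_profile_deg)]

# One identical pressure command produces seven different finger shapes:
# the object's geometry, not the controller, does the work.
object_profile = [35, 50, 80, 90, 80, 50, 35]  # contact angles, apple-ish shape
curls = grasp([60] * N_CHAMBERS, object_profile)
print(curls)  # each finger stops where it meets the object
```

The point of the sketch is the asymmetry: the controller’s output is a single vector of pressures, while the resulting grasp shape has as many degrees of freedom as the object forces upon it.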

The Ocado Technology robotics team replicated a production warehouse scenario in order to evaluate the performance of the RBO Hand 2 for Ocado’s use case. The team mounted the soft hand on two different robot arms, a Staubli RX160L and a KUKA LBR iiwa14. Both of these arms can operate in the standard position controlled mode; in addition to this, the KUKA provides the capability of demonstrating a certain amount of software controlled compliance in the arm.

The experiments started with the simple scenario of grasping a single object from the example set using only the bottom of the tray. Initial results showed that the hand is able to successfully grasp a variety of shapes, and they suggested the chance of success increased when environmental constraints were used effectively to restrict the movement of the object.

In the coming months, the team plans to explore more complex scenarios, adding more objects to the IFCO and introducing additional environmental constraints that could be exploited by a grasping strategy.

Groundbreaking system allows locked-in syndrome patients to communicate

Using a device which detects patterns in brain activity, patients paralysed by ALS can answer ‘yes’ or ‘no’ – and tell doctors they are ‘happy’ with life

Shown on a model, the cap uses infrared light to spot variations in blood flow in different regions of the brain. A computer then learns to distinguish the blood flow patterns for “yes” and “no” for each patient. Photograph: Wyss Center

Doctors have used a brain-reading device to hold simple conversations with “locked-in” patients in work that promises to transform the lives of people who are too disabled to communicate.

The groundbreaking technology allows the paralysed patients – who have not been able to speak for years – to answer “yes” or “no” to questions by detecting telltale patterns in their brain activity.

Three women and one man, aged 24 to 76, were trained to use the system more than a year after they were diagnosed with completely locked-in state, or CLIS. The condition was brought on by amyotrophic lateral sclerosis, or ALS, a progressive neurodegenerative disease which leaves people totally paralysed but still aware and able to think.

“It’s the first sign that completely locked-in syndrome may be abolished forever, because with all of these patients, we can now ask them the most critical questions in life,” said Niels Birbaumer, a neuroscientist who led the research at the University of Tübingen.

Activation of a “yes” response vs a “no” response in both hemispheres of the brain. Blue indicates a lower concentration of oxy-hemoglobin and red an increased concentration. Illustration: Wyss Center

“This is the first time we’ve been able to establish reliable communication with these patients and I think that is important for them and their families,” he added. “I can say that after 30 years of trying to achieve this, it was one of the most satisfying moments of my life when it worked.”

All of the patients, who are fed through tubes and kept alive on ventilators, are cared for at home by family members. To train the patients on the system, doctors asked them to think “yes” or “no” in response to a series of simple questions, such as “Your husband’s name is Joachim” and “Berlin is the capital of France.”

During the sessions, the patients wore a cap that uses infrared light to spot variations in blood flow in different regions of the brain. As they answered the questions, a computer hooked up to the cap learned to distinguish the blood flow patterns for “yes” and “no” in each patient.

When the patients scored at least 70% on the training questions, the doctors moved on to more personal questions. The most important of these were about quality of life. Perhaps unexpectedly, all four patients indicated that they were “happy” with life, suggesting that locked-in syndrome might not be the living hell many presume it to be.
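For readers curious what “learning to distinguish the blood flow patterns” might look like computationally, here is a minimal sketch using a nearest-centroid classifier on synthetic data. This is not the study’s actual method or data; the patterns and noise levels are invented, and it only illustrates the train-then-test loop behind the 70% criterion.

```python
import numpy as np

# Illustrative sketch only: a nearest-centroid classifier standing in for
# per-patient "yes"/"no" pattern learning. The feature vectors are synthetic
# stand-ins for fNIRS blood-flow measurements, not data from the study.

rng = np.random.default_rng(0)

YES_PATTERN = np.array([1.0, 0.2, -0.3])  # made-up oxygenation signature
NO_PATTERN = np.array([-0.8, 0.5, 0.4])   # made-up oxygenation signature

def make_trials(centre, n=40, noise=0.5):
    """Simulate n noisy trials of a blood-flow pattern around `centre`."""
    return centre + noise * rng.standard_normal((n, len(centre)))

# "Training": learn this patient's average pattern for each answer.
yes_centroid = make_trials(YES_PATTERN).mean(axis=0)
no_centroid = make_trials(NO_PATTERN).mean(axis=0)

def classify(trial):
    """Label a new trial by whichever learned pattern it is closer to."""
    d_yes = np.linalg.norm(trial - yes_centroid)
    d_no = np.linalg.norm(trial - no_centroid)
    return "yes" if d_yes < d_no else "no"

# Score fresh simulated trials; in the study, 70% on training questions
# was the bar for moving on to personal questions.
trials = [(t, "yes") for t in make_trials(YES_PATTERN, n=20)]
trials += [(t, "no") for t in make_trials(NO_PATTERN, n=20)]
accuracy = sum(classify(t) == label for t, label in trials) / len(trials)
print(f"accuracy: {accuracy:.0%}")
```

The hard part in practice is that real fNIRS signals are noisy and drift over a session, which is why the system had to be retrained per patient and why a 70% threshold, rather than perfection, gated the move to real questions.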

But while the renewed ability to communicate with the world was a boon for the patients and their carers, not all of the answers went down well. One patient was a 61-year-old man whose 26-year-old daughter asked whether she should marry her boyfriend, “Mario”. Her father said “no” nine times out of ten. “She went ahead anyway,” Birbaumer told the Guardian. He did not ask if she regretted ever posing the question.

The findings, reported in the journal PLOS Biology, do not mean that all locked-in patients are content with their lives. The four patients involved in the study had all chosen to be kept alive on a ventilator once their own breathing had failed, a decision that suggested they did not wish to die. Only a small percentage of CLIS patients who are moved on to ventilators survive the transition. “We have never had a patient who survived outside family care,” Birbaumer said.

But patients with locked-in syndrome have reported a good quality of life before, even matching that of healthy people of the same age. Birbaumer said the reasons are unclear, but he wonders if patients become focused on the good social interactions around them, and even experience something akin to a state of meditation because they cannot feel or move their bodies. “We find that they see life in a more positive way,” he said.

For his next project, Birbaumer wants to build a system that allows patients to communicate more proactively, rather than simply answer questions. In the 1990s, the French journalist, Jean-Dominique Bauby, who became locked-in after a massive stroke, dictated his bestselling memoir, The Diving Bell and the Butterfly, by blinking his left eye to select letters from the alphabet. Birbaumer believes a system that reads brain activity could achieve the same ends for completely locked-in patients who cannot even move their eyelids.

Adrian Owen, a neuroscientist at the University of Western Ontario, has been exploring whether the same technology, known as functional near-infrared spectroscopy, or fNIRS, can be used to communicate with other kinds of brain-injured patients, including those who are presumed to be in a vegetative state. “The results of this study suggest that we are on the right track,” he said.

“Finding a portable, cost-effective and reliable means for communicating with patients who are entirely physically non-responsive is the holy grail for those of us working in this field. If these findings can be replicated in a larger group of patients they suggest that fNIRS may be the answer.

“One of the most surprising outcomes of this study is that these patients reported being ‘happy’ despite being physically locked-in and incapable of expressing themselves on a day-to-day basis, suggesting that our preconceived notions about what we might think if the worst was to happen are false. Indeed, previous research has shown that most locked-in patients are actually reasonably satisfied with their quality of life,” he added.

Researchers Create World’s First Heat-Driven Transistor

Researchers at Linköping University in Sweden have created what they say is a heat-gated organic transistor.

Heat-driven transistor. Image credit: Linköping University.

“We demonstrated for the first time that heat signal can act as input for logic circuits, opening up new possibility to couple electrical conductivity with thermoelectricity, in the new field of thermoelectronics,” said Linköping University Professor Xavier Crispin and co-authors.

The team’s heat-gated transistor consists of an electrolyte-gated transistor and an ionic thermoelectric supercapacitor.

In the capacitor, heat is converted to electricity, which can then be stored in the capacitor until it is needed.

Prof. Crispin and his colleagues searched among conducting polymers and produced a liquid electrolyte with a 100 times greater ability to convert a temperature gradient to electric voltage than the electrolytes previously used.

The liquid electrolyte consists of ions and conducting polymer molecules. The positively charged ions are small and move rapidly, while the negatively charged polymer molecules are large and heavy. When one side is heated, the ions move rapidly towards the cold side and a voltage difference arises.
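The relationship being exploited here is the Seebeck effect: the open-circuit voltage scales linearly with the temperature difference, V = S·ΔT. A quick back-of-the-envelope sketch, using illustrative order-of-magnitude coefficients rather than figures from the paper, shows what a “100 times greater” coefficient buys:

```python
# Back-of-the-envelope Seebeck arithmetic: V = S * dT. The coefficients are
# illustrative order-of-magnitude values, not figures from the paper.

def thermo_voltage(seebeck_v_per_k, delta_t_k):
    """Open-circuit voltage from a temperature difference (V = S * dT)."""
    return seebeck_v_per_k * delta_t_k

conventional_S = 0.1e-3         # ~0.1 mV/K, typical thermoelectric material
ionic_S = 100 * conventional_S  # the article's "100 times greater" electrolyte

dT = 2.0  # a modest 2 K temperature difference across the sensor

v_conventional = thermo_voltage(conventional_S, dT)
v_ionic = thermo_voltage(ionic_S, dT)

print(f"conventional: {v_conventional * 1e3:.1f} mV")  # 0.2 mV
print(f"ionic electrolyte: {v_ionic * 1e3:.1f} mV")    # 20.0 mV
```

Tens of millivolts from a couple of degrees is enough to gate an electrolyte transistor directly, which is why a single connector from sensor to circuit suffices.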

According to the team, the heat-gated transistor opens the possibility of many new applications such as detecting small temperature differences, and using functional medical dressings in which the healing process can be monitored.

It is also possible to produce circuits controlled by the heat present in infrared light, for use in heat cameras and other applications.

The high sensitivity to heat — a hundred times greater than traditional thermoelectric materials — means that a single connector from the heat-sensitive electrolyte, which acts as sensor, to the transistor circuit is sufficient.

One sensor can be combined with one transistor to create a ‘smart pixel.’ A matrix of smart pixels can then be used, for example, instead of the sensors that are currently used to detect infrared radiation in heat cameras.

The research was published Jan. 31, 2017 in the journal Nature Communications.

Tesla may be working on computers that can be implanted into the BRAIN: Elon Musk to reveal plans ‘next month’

  • Elon Musk replied to a tweet about neural lace, saying ‘maybe next month’
  • He has previously said a lifestyle where ‘we are the AI’ would be best outcome
  • This would prevent the rise of an ‘evil dictator AI’ as anyone could take part 


In the hope of creating ‘human-AI’ cyborgs, Elon Musk has revealed that Tesla may be working on computers that can be implanted into people’s brains.

The astonishing revelation came in response to a tweet asking Musk if he was working on ‘neural lace’ – a way of installing computers in the human brain.

It is not known what the brain chip could be used for, but Musk has previously said that it will be the ‘thing that really matters for humanity to achieve symbiosis with machines.’

Elon Musk hinted that Tesla may be working on computers that can be implanted into the brain (stock image), although he has not expanded on what the chips could be used for

Revol Devoleb, a self-proclaimed technology enthusiast from Finland, tweeted to Musk last week, asking: ‘What about neural lace? Announcement soon?’

Musk remained elusive in response, saying: ‘Maybe next month.’

This isn’t the first time that Musk has hinted that Tesla may be working on artificial intelligence.

In a recent interview with Y Combinator, Musk explained that the ‘best outcome’ between humankind and machines would be a collective lifestyle where ‘we are the AI.’

Such a scenario would stamp out the possibility of an ‘evil dictator AI,’ Musk said, allowing anyone who wants to take part to become an ‘AI-human symbiote.’ 

Musk likened the situation to the cooperation of the limbic system and the cortex in the human brain.

Last summer, when asked at the Code Conference in southern California whether we are living in a simulated computer game, Elon Musk said the answer is ‘probably’.

Musk believes that computer game technology, particularly virtual reality, is already approaching a point that it is indistinguishable from reality.

‘If you assume any rate of improvement at all, then the games will become indistinguishable from reality, just indistinguishable,’ he said.

‘Even if the speed of those advancements dropped by 1,000, we are clearly on a trajectory to have games indistinguishable from reality, and there would be billions of them.

‘It would seem to follow that the odds that we’re in ‘base reality’ is one in billions’, Mr Musk said.

In the interview, he explained that these two systems – the primitive brain that controls your instincts, and the ‘thinking part,’ respectively – work well together, and it would be extremely unusual to find someone who wished to get rid of one of them.

Building off of this, he told Y Combinator, ‘I think if we can effectively merge with AI, like improving the neural link between the cortex and your digital extension of yourself, which already exists but just has a bandwidth issue, then effectively, you become an AI-human symbiote.’

This would also solve the ‘control problem,’ he went on to explain, as it could become so widespread that ‘anyone who wants it can have it.’

‘We don’t have to worry about some evil dictator AI,’ Musk told Y Combinator, ‘because we are the AI collectively.

‘That seems like the best outcome I can think of.’

In November, Elon Musk predicted that the rise of machines in the workplace could soon mean job displacement and a ‘universal basic income’ for humans.

The billionaire explained that our options may be limited in the future as automation becomes the norm, and this could even leave people with more time to enjoy their lives.

Musk said humans will eventually need to achieve symbiosis with ‘digital super-intelligence’ in order to cope with the advancing world – but, he warns doing this might be the toughest challenge of all. 

In an interview with CNBC, the CEO of Tesla, SolarCity, and SpaceX said certain jobs, like truck driving, may soon be lost to automated technologies.

And with machines taking over the workforce, human income would shift as well, potentially necessitating universal payments from the government.

‘There is a pretty good chance we end up with a universal basic income, or something like that, due to automation,’ Musk told CNBC.

‘I’m not sure what else one would do with this.

‘I think that’s what would happen.’

Musk went on to explain that some people may have plans to do more ‘complex’ and ‘more interesting’ things with these capabilities in the future.

This will open the door for more leisure time, he said.

Machines equipped with artificial intelligence are ever creeping into the workforce, and for humans, this could soon mean job displacement and a ‘universal basic income,’ according to Elon Musk (pictured) 

‘And then we have to figure out how we integrate with a world in the future with advanced AI,’ Musk told CNBC, noting that this will likely be the ‘toughest’ part.

‘Ultimately,’ he said, ‘it would need to be some kind of improved symbiosis with digital super-intelligence.’

The Tesla CEO pointed to the example of the potential future capabilities of semi-trailer trucks.

One day, these trucks may not require drivers, and could instead operate autonomously while a human oversees an entire fleet.

When a problem arises, the fleet operator could take over remotely.

The iPhone 8 has been making headlines ever since its predecessor, the iPhone 7, made its debut. With the upcoming flagship widely considered to be Apple’s first real effort in years at innovating and upgrading its most iconic product, expectations for the iPhone 8 are very high. If rumors are to be believed, however, it appears that Apple would not only meet the market’s expectations for its 2017 halo device, but far exceed them.

The past few weeks have been rife with leaks and other details about the iPhone 8. Details of the device’s release date and specs continue to emerge from numerous analysts and reports in a steady stream. While Apple has not announced an official release date, analysts are nearly unanimous that the Cupertino-based tech giant will release the iPhone 8 sometime in September 2017.

The tech giant has followed a September release date for its flagship smartphone ever since the debut of the iPhone 5 back in 2012. Thus, there is a good chance that the tech giant would release the 10th-anniversary iPhone this September as well. With the iPhone 8’s release date all but confirmed for later this year, speculations about the upcoming flagship’s specs and features have also begun to consistently emerge.

One thing that the iPhone 8 is rumored to feature is a brand new display that goes far beyond what current flagships in the mobile industry offer. The iPhone 4 was a revolutionary device in its time thanks to its groundbreaking Retina display. Over the years, however, Apple’s rivals, such as Samsung and Sony, have released flagship smartphones with displays that far exceed Retina resolution. Currently, the screens of the iPhone 7 and iPhone 7 Plus look downright dated compared with those of main rivals such as the Galaxy S7 Edge and the LG G5.

This, however, seems set to change with the iPhone 8. According to a PC Advisor report, Apple has been asking its suppliers to create a display that would feature “better screen resolution than ones from Samsung.” Considering that Samsung’s current flagships utilize a Quad HD screen, there are only two resolution options that Apple could be gunning for – Quad Full HD (QFHD) or straight-up 4K.

Rumors about the Galaxy S8 already point to the Samsung flagship being equipped with a display that exceeds its current resolution. With this in mind, there is a pretty good chance that the iPhone 8 and the Galaxy S8 would both feature 4K or at least QFHD displays. If these rumors prove true, 2017 might very well be the year when mainstream mobile devices breach the 4K barrier.

Another notable feature rumored for the iPhone 8 is a special laser sensor for gesture recognition. While the technology is not really new, Apple’s take on gesture commands might well be extremely creative. Although the laser sensors have not been confirmed by the tech giant, rumors about the feature do line up with speculation pointing to sensors embedded in the iPhone 8’s display. If anything, these rumored features all but confirm the notion that the iPhone 8 will feature a borderless screen dotted with openings for the phone’s camera, speakers and other sensors.

The iPhone 8 is rumored to feature long-range wireless charging.
[Image by Pixabay]

Perhaps the iPhone 8’s true killer feature, however, would be its wireless charging capabilities. Over the last few months, numerous details have emerged stating that the iPhone 8 would feature long-range wireless charging as a result of its partnership with Energous, a firm dedicated to the development of wireless charging technologies, according to a report from the Express. Energous has previously developed WattUp RF, an innovative wireless charging solution that is capable of providing power to devices in a 15-foot radius.

If Apple does indeed feature Energous’ technology in the iPhone 8, it would make the device far more advanced than other wirelessly charging flagships such as the Galaxy S7 Edge, which can only charge wirelessly over distances of about 1.6 inches. Wireless charging is not really anything new, but Apple’s approach to the innovation would most likely make it a game-changer just the same.

While most speculations about the iPhone 8 are extremely positive, there is one particular rumor that has managed to get numerous Apple fans apprehensive. Recent reports suggest that this year would feature a three-pronged flagship debut, with Apple releasing the iPhone 8, iPhone 7S, and iPhone 7S Plus in September. With this in mind, there is a pretty good chance that the iPhone 8, the company’s halo device, would be priced at a premium. Thus, while the iPhone 8 would be revolutionary, its price point might simply be far too high to be attainable for most consumers.

Then again, it appears that Apple is really working very hard to make sure that the iPhone 8 would be its best device yet. With the smartphone being hyped as yet another possible game changer in the mobile market, there is a very good chance that the device would be Apple’s most successful iPhone yet.

 The iPhone 8 is speculated to feature a QFHD or 4K display.

ASUS Unveils Tinker Board, A Raspberry Pi Competitor With More Horsepower And 4K Video Support

If there’s one thing the passionate and growing DIY single-board computer community would ask of the Raspberry Pi, it’s more performance and connectivity features in the same amazingly tiny package. Tech giant Asus is attempting to answer that call with its own ARM-based mini PC, the Tinker Board. Though the model number is actually the Asus 90MB0QY1-M0EAY0, the Tinker Board branding is not only easier to remember, it smartly calls out exactly the demographic the product is intended for: makers and tinkerers looking for a tiny slice of all-in-one PC goodness for fun projects, media servers, embedded applications, or perhaps building your own retro NES Mini alternative if you weren’t able to score one over the holidays.


Where the Raspberry Pi 3 is powered by a 64-bit ARM Cortex-A53 based Broadcom BCM2837 quad-core processor at 1.2GHz, the Asus Tinker Board is instead powered by a 32-bit ARM Cortex-A17 based Rockchip RK3288 quad-core at 1.8GHz. Asus claims it’s almost twice as fast as the Raspberry Pi 3 Model B, and in addition the Tinker Board sports 2GB of RAM versus the RPi 3’s 1GB configuration. Other advantages for the Tinker Board include full H.264 4K video decode capability in hardware, bolstered by stronger graphics performance as well, via the Rockchip RK3288’s ARM Mali-T764 graphics core. The Asus mini computer also supports 192kHz/24-bit audio sample rates versus the RPi 3’s 48kHz/16-bit offering, and it has an integrated full-speed Gigabit Ethernet port, a significant boost over the Raspberry Pi’s 100Mb LAN. Other integrated connectivity options for the Tinker Board are 802.11b/g/n WiFi and Bluetooth 4, similar to what’s available on the Raspberry Pi 3 Model B.

HotHardware has a detailed breakdown of the specs and comparisons, but the tiny Asus computer also officially supports Debian Linux and Kodi (formerly XBMC, or Xbox Media Center) with its slick media-streaming interface. The Asus Tinker Board is expected to ship early next month at around the $69 price point (not confirmed), and though you can pre-order now on at least one European distributor site, I’d suggest keeping an eye on Amazon for a listing any day now.

SoundHound Raises $75 Million to Take on Amazon, Google in AI

SoundHound Inc., known for its music recognition app, raised $75 million to compete with the likes of Amazon.com Inc. and Google to build artificial intelligence that helps machines understand human voices.

The 12-year-old startup is betting that as more everyday devices get connected to the internet, using speech to control and direct them will become the dominant form of interaction. The company aims to encourage device makers to use voice AI tools offered by SoundHound rather than try to build their own.

Santa Clara, California-based SoundHound is one of only a few companies that has built from scratch a core AI technology that can identify and interpret audio. Most of the others that have their own speech-recognition engines are big names, including Apple Inc., Baidu Inc., and Microsoft Corp. And many of them tightly control how the software can be used and the data that’s generated, said SoundHound Chief Executive Officer Keyvan Mohajer.

“We don’t have an agenda to hijack your product,” Mohajer said. “If you use Amazon, you lose your brand, your users. You have to ask your user to log into their Amazon account, they have to call on Alexa, and all the data belongs to them.” Meanwhile, when customers build voice-enabled devices or apps using SoundHound’s technology, the startup doesn’t own the users or the data, he said.

The fresh capital will help SoundHound add customers to its speech AI platform, Houndify, at a faster rate and expand its operations to Asia and Europe. Investors in this round included Samsung Electronics Co.’s Catalyst Fund and graphics chipmaker Nvidia Corp. – both of which build hardware integral to artificial intelligence and connected devices, the Internet of Things, and already work with SoundHound. Nomura Group, Kleiner Perkins Caufield & Byers, and the SharesPost 100 Fund were also among the new investors. The company declined to discuss its valuation; research firm Pitchbook Inc. estimates it at around $800 million.

Already, SoundHound has integrated its software with Samsung hardware, making it easier for developers to add voice-enabled technology to devices that use the electronics giant’s chips. The Korean company staked a claim in building the future connected-device industry, announcing a plan last year to invest $1.2 billion in the U.S. in IoT projects over the next four years. SoundHound also has worked with Nvidia to combine speech recognition with the chipmaker’s auto infotainment systems. Mohajer expects to work on more projects with his new investors, though declined to provide details.

The end goal for all of these companies is to sell technology and devices that allow people to talk about and ask for anything and have the systems understand and be able to respond accordingly. Part of that challenge is what SoundHound and its rivals in speech are working on: AI that knows the difference between pizza, the food, and Pisa, the town famous for its leaning tower.

This requires the software to have contextual knowledge of the differences. Many companies with data in domains from travel to weather to food have given SoundHound’s AI access, so that the speech recognition can simultaneously tap into all of these data sources for knowledge of what the person is saying – what SoundHound calls its Collective AI. Being able to access information from companies like Uber and Yelp Inc. means developers using SoundHound’s software to build voice-enabled products are better at understanding what the user is saying, Mohajer said.

SoundHound has also taken a different approach to building its speech AI: the technology identifies the words in real time while simultaneously working to decipher the context, which Mohajer said provides faster results. Most other speech and language interpretation technologies take a piecemeal approach, where the software first figures out the words from the audio and then deciphers the meaning.

Based on the description of the technology, it’s likely that SoundHound’s approach uses incremental recognition, where the software doesn’t wait until the user stops talking to try and interpret the words, said Alexander Rudnicky, a research professor at Carnegie Mellon University, though he added that he can’t be sure of exactly how SoundHound’s AI works.

“If you want to give people something as fast as possible, then the right idea is to do this incremental approach,” he said. But optimizing for speed in this way may result in a speech and language interpretation system that struggles with certain use cases, he said.
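The incremental-versus-batch distinction Rudnicky describes can be sketched with a toy interpreter: the batch version waits for the full utterance, while the incremental version re-evaluates its hypothesis after every word. The intent table and utterance below are invented for illustration; a real system couples speech recognition and language understanding rather than matching keywords.

```python
# Toy contrast between batch and incremental interpretation. The intent
# table, utterance, and keyword "interpreter" are invented for illustration.

INTENT_KEYWORDS = {"pizza": "order_food", "weather": "get_weather"}

def interpret(words):
    """Return the intent of the first recognized keyword, else None."""
    for w in words:
        if w in INTENT_KEYWORDS:
            return INTENT_KEYWORDS[w]
    return None

def batch_interpret(utterance):
    # Waits until the user stops talking, then processes everything at once.
    return interpret(utterance.split())

def incremental_interpret(word_stream):
    # Yields the current best hypothesis after every incoming word, so
    # downstream logic can act before the utterance is finished.
    seen = []
    for word in word_stream:
        seen.append(word)
        yield interpret(seen)

utterance = "i would like a pizza delivered tonight"
hypotheses = list(incremental_interpret(utterance.split()))
print(hypotheses)
# -> [None, None, None, None, 'order_food', 'order_food', 'order_food']
# The incremental path commits to "order_food" at word five of seven,
# before the utterance ends; the batch path only answers at the end.
```

The trade-off Rudnicky flags is visible even here: an early commitment can be wrong if a later word revises the meaning, so a production system must be able to retract or re-rank hypotheses as more audio arrives.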