http://www.theglobeandmail.com/technology/google-opts-for-human-touch-in-fight-against-fake-news/article34803437/

Google opts for human touch in fight against fake news

Every year, Google makes hundreds of changes to the computer code of its search engine, but in an attempt to combat the scourge of fake news and offensive content, its engineers are beginning to collect data from a new source: humans.

“It’s become very apparent that a small set of queries in our daily traffic (around 0.25 per cent), have been returning offensive or clearly misleading content,” writes Ben Gomes, vice-president of engineering for Google, in a blog post outlining some policy changes that will seek more user feedback in an effort to clean up some of the scandals related to automatically generated sections of its search results.

Google’s troubles with offensive content have been popping up with more frequency in recent months. In October, 2016, users noticed Google would sometimes autocomplete the phrase “are jews …” with the word “evil.” After a public outcry the company made changes to remove the offending lines, adding more scrutiny in its algorithm to so-called “sensitive” topics. But even after the fixes, its search engine still regularly turns up offensive results.

For example, right now, users who type “are black” into a Google search bar might see autocomplete suggestions such as “are black people smart,” which leads to a search page topped by a story about the offensiveness of that autocomplete suggestion, followed by a Fox News article claiming a DNA connection to intelligence and a fourth article with the headline: “Black people aren’t human.” That last article is from an organization called National Vanguard, which is identified as a U.S. neo-Nazi white nationalist splinter group by the Southern Poverty Law Center.

To combat the problem, Google is giving regular users a new “report” button on its search-bar autocomplete feature so people can more easily alert Google to problematic results. A similar button will be added to the “featured snippets” section of its results pages. Autocomplete and featured snippets – previews of search results – have both been the subject of controversies that involved the promotion of conspiracy theories, fake news and racist slurs on the hugely popular website.

After Tuesday, a user who spots an offensive autocomplete result will be able to flag it for Google’s engineers to review.

But even these high-profile anecdotes don’t capture the scale of the problem Google faces. The company doesn’t say how many searches a day it processes; it simply says it processes “trillions” of search requests a year. So while one-quarter of 1 per cent of bad content might be a good result for almost any other enterprise, Google could be responding to many billions of user requests a year with these “low quality” results. Small for Google is still a potential avalanche of unwelcome content for users.
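
A back-of-the-envelope calculation makes that scale concrete. The annual query volume below is an assumption for illustration, since Google discloses only “trillions”:

    // Rough scale estimate of "low quality" results at Google volumes.
    // ASSUMPTION: 2 trillion searches/year is illustrative, not confirmed.
    fun main() {
        val searchesPerYear = 2.0e12   // assumed annual query volume
        val offensiveRate = 0.0025     // 0.25 per cent, per Google's blog post
        val bad = (searchesPerYear * offensiveRate).toLong()
        println("Problematic queries per year: $bad") // 5000000000, about five billion
    }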

Mr. Gomes explained that content promoting hate is also being given the lowest possible search weighting, and an increased importance will be given to “high-quality” sources of information, particularly on sensitive topics. The process of sifting through search results involves a mix of algorithmic and human-curation efforts.

For instance, Google has seen posts containing Holocaust-denying falsehoods ranking high in its searches – an absurd condition when there is excellent scholarship and documentation of the horrors of the Holocaust available online.

https://mspoweruser.com/microsoft-brings-support-for-android-for-work-apps-to-its-custom-android-launcher/

Microsoft brings support for Android for Work apps to its custom Android launcher

Microsoft is currently testing yet another major update for its Arrow Launcher on Android. The company today released a new update for its custom Android launcher to users who are part of the beta program, bringing a number of new features. Most notably, the launcher now supports Android for Work apps. For those unfamiliar, Android for Work effectively separates the business apps from the personal apps on your phone. For instance, you’ll see two different versions of Gmail, which will separate the content from your work account and your personal account. Android for Work also allows your employer to deploy apps to your device, which means Arrow Launcher will automatically show new apps whenever they are deployed by your organization.
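
For launcher developers, the plumbing behind this is Android’s LauncherApps system service, which can enumerate apps profile by profile. The sketch below shows how a launcher might list both personal and work apps; it is illustrative, not Arrow Launcher’s actual code:

    import android.content.Context
    import android.content.pm.LauncherApps
    import android.os.UserManager

    // Minimal sketch: enumerate launchable apps across the personal and work
    // profiles. A real launcher would also register a LauncherApps.Callback so
    // apps deployed by an employer show up as soon as they are installed.
    fun listAllProfileApps(context: Context): List<String> {
        val launcherApps =
            context.getSystemService(Context.LAUNCHER_APPS_SERVICE) as LauncherApps
        val userManager =
            context.getSystemService(Context.USER_SERVICE) as UserManager
        return userManager.userProfiles.flatMap { profile ->
            // getActivityList(null, user) returns every launcher activity for
            // that profile; the system badges work-profile entries.
            launcherApps.getActivityList(null, profile).map { it.label.toString() }
        }
    }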

There’s also a new Notes card for the widgets section in Arrow Launcher where users can jot down their thoughts. Arrow Launcher’s new Notes card is similar to Windows’ Sticky Notes feature: it lets you take notes in the launcher, and those notes are easily accessible from the home screen. You can also create lists within a note, or add an image if you want, which is pretty neat.

Other notable new additions in the latest Arrow Launcher update include QR detection and voice dictation for the Search feature, and the ability to create app shortcuts. Here’s the full list of new features:

  • Notes! New card that allows you to easily jot down some notes, with image support.
  • Search function now includes voice and QR.
  • Android for Work apps support.
  • New feature: create app shortcuts.
  • Password protection option for hidden apps — more secure than ever.
  • Bug fixes and performance enhancements.

Arrow Launcher’s new features will arrive for all users sometime soon, but if you are part of the beta testing program, you can grab the latest update from the Google Play Store.

http://www.kurzweilai.net/the-first-2d-microprocessor-based-on-a-layer-of-just-3-atoms

The first 2D microprocessor — based on a layer of just 3 atoms

May one day replace traditional microprocessor chips as well as open up new applications in flexible electronics
April 24, 2017

Overview of the entire chip. AC = Accumulator, internal buffer; PC = Program Counter, points at the next instruction to be executed; IR = Instruction Register, used to buffer data- and instruction-bits received from the external memory; CU = Control Unit, orchestrates the other units according to the instruction to be executed; OR = Output Register, memory used to buffer output-data; ALU = Arithmetic Logic Unit, does the actual calculations. (credit: TU Wien)

Researchers at Vienna University of Technology (known as TU Wien) in Vienna, Austria, have developed the world’s first two-dimensional microprocessor — the most complex 2D circuitry so far. Microprocessors based on atomically thin 2D materials promise to one day replace traditional microprocessors as well as open up new applications in flexible electronics.

Consisting of 115 transistors, the microprocessor can run simple user-defined programs stored in an external memory, perform logical operations, and communicate with peripheral devices. The microprocessor is based on molybdenum disulphide (MoS2), a 2D semiconductor layer just three atoms thick, consisting of molybdenum and sulphur atoms, with a surface area of around 0.6 square millimeters.

Schematic drawing of an inverter (“NOT” logic) circuit (top) and an individual MoS2 transistor (bottom) (credit: Stefan Wachter et al./Nature Communications)

For demonstration purposes, the microprocessor is currently a 1-bit design, but it’s scalable to a multi-bit design using industrial fabrication methods, says Thomas Mueller, PhD, team leader and senior author of an open-access paper on the research published in Nature Communications.*
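
As a rough illustration of why a 1-bit design scales, the sketch below models a 1-bit full adder in software and cascades it to multi-bit width. This is a conceptual model of the principle, not the TU Wien circuit itself:

    // Conceptual 1-bit ALU slice (a full adder) and ripple-carry cascading.
    // Illustrative only; the actual chip implements this in MoS2 transistors.
    data class AluResult(val sum: Int, val carry: Int)

    fun fullAdder(a: Int, b: Int, carryIn: Int): AluResult {
        val sum = a xor b xor carryIn
        val carry = (a and b) or (carryIn and (a xor b))
        return AluResult(sum, carry)
    }

    fun main() {
        var carry = 0
        val a = listOf(1, 0, 1)  // 5, least significant bit first
        val b = listOf(1, 1, 0)  // 3
        val out = a.zip(b).map { (x, y) ->
            val r = fullAdder(x, y, carry); carry = r.carry; r.sum
        }
        println(out + carry)     // [0, 0, 0, 1] = 8, LSB first
    }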

New sensors and flexible displays

Two-dimensional materials are flexible, making future 2D microprocessors and other integrated circuits ideal for uses such as medical sensors and flexible displays. They promise to extend computing to the atomic level, as silicon reaches its physical limits.

However, to date, it has only been possible to produce individual 2D digital components using a few transistors. The first 2D MoS2 transistor with a working 1-nanometer (nm) gate was created in October 2016 by a team led by Lawrence Berkeley National Laboratory (Berkeley Lab) scientists, as KurzweilAI reported.

Mueller said much more powerful and complex circuits, with thousands or even millions of transistors, will be required for this technology to have practical applications. Reproducibility, along with the yield of the transistor-fabrication process, remains one of the biggest challenges in this field of research, he explained.

* “We also gave careful consideration to the dimensions of the individual transistors,” explains Mueller. “The exact relationships between the transistor geometries within a basic circuit component are a critical factor in being able to create and cascade more complex units. … the major challenge that we faced during device fabrication is yield. Although the yield for subunits was high (for example, ∼80% of ALUs were fully functional), the sheer complexity of the full system, together with the non-fault tolerant design, resulted in an overall yield of only a few per cent of fully functional devices. Imperfections of the MoS2 film, mainly caused by the transfer from the growth to the target substrate, were identified as main source for device failure. However, as no metal catalyst is required for the synthesis of TMD films, direct growth on the target substrate is a promising route to improve yield.”
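
Those yield figures square with simple probability: in a non-fault-tolerant design every device must work, so overall yield falls off exponentially with component count. An illustrative calculation, assuming independent failures and a per-transistor yield that is not a figure from the paper:

    import kotlin.math.pow

    // Illustrative only: the whole chip works only if every transistor does,
    // so overall yield is roughly per-device yield raised to the device count.
    fun main() {
        val transistors = 115
        val perTransistorYield = 0.97  // assumed for illustration
        val overall = perTransistorYield.pow(transistors)
        println("Overall yield: %.1f%%".format(overall * 100)) // about 3%
    }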


Abstract of A microprocessor based on a two-dimensional semiconductor

The advent of microcomputers in the 1970s has dramatically changed our society. Since then, microprocessors have been made almost exclusively from silicon, but the ever-increasing demand for higher integration density and speed, lower power consumption and better integrability with everyday goods has prompted the search for alternatives. Germanium and III–V compound semiconductors are being considered promising candidates for future high-performance processor generations and chips based on thin-film plastic technology or carbon nanotubes could allow for embedding electronic intelligence into arbitrary objects for the Internet-of-Things. Here, we present a 1-bit implementation of a microprocessor using a two-dimensional semiconductor—molybdenum disulfide. The device can execute user-defined programs stored in an external memory, perform logical operations and communicate with its periphery. Our 1-bit design is readily scalable to multi-bit data. The device consists of 115 transistors and constitutes the most complex circuitry so far made from a two-dimensional material.

http://news.ubc.ca/2017/04/25/ubc-researchers-call-for-suspension-of-site-c-project/

Report calls for suspension of Site C project

Journal of Commerce highlighted a report co-authored by Karen Bakker, the Canada Research Chair in political ecology and director of UBC’s program on water governance, on the Site C dam project.

The report calls for the project to be suspended as it has become “uneconomic.”

The report was also mentioned in a Vancouver Sun opinion piece.

http://www.livescience.com/58806-soda-linked-to-memory-problems-strokes-dementia.html

Gulp! Soda Linked to Memory Woes, Strokes and Dementia

People who often drink soda, with sugar or without it, may be more likely to develop memory problems and have smaller brain volumes, according to two recent studies.

In one study, researchers found that people who drank diet soda every day were three times more likely to have a stroke or develop dementia over 10 years than those who did not consume any diet soda.

In the second study, the same researchers concluded that people who consumed at least one diet soda a day had smaller brain volumes than those who did not drink any diet soda. Moreover, that same study found that people who consumed more than two sugary beverages such as soda or fruit juice a day had smaller brain volumes and worse memory function than those who did not consume any such beverages.

Although both studies show that there is a link between drinking diet or sugary beverages and certain health outcomes, the results do not mean that consuming such beverages directly causes these outcomes, said the lead author of both studies, Matthew P. Pase, a neurology researcher at Boston University School of Medicine.

In the first study, published April 20 in the journal Stroke, the researchers interviewed about 4,300 people, ages 45 and older, three times over seven years, and asked them whether they drank any diet or sugary beverages. Then, toward the end of the seven-year period, the scientists began to monitor the study participants’ health for cases of stroke and dementia, and continued to do so for the next 10 years. During this period, 97 people had a stroke and 81 people developed dementia — a number that included 63 cases of Alzheimer’s disease.

The researchers found that the daily consumption of diet beverages, but not sugary beverages, was linked to a higher risk of stroke and dementia over the 10-year period. The reasons behind these findings are not clear, but previous research had linked the consumption of diet drinks with obesity and diabetes, which might also be linked to poor blood circulation, Pase said. Problems with circulation may contribute to a person’s risk of stroke or dementia because the brain relies on a constant supply of blood to function well, he said.

The findings of this study suggest that turning to diet beverages in the hope of avoiding extra calories from sugary drinks may not be a good idea, said Dr. Paul Wright, chairman of neurology at North Shore University Hospital in Manhasset, New York, who was not involved in the study. “The right direction to go in is to have plain water,” or other beverages that do not contain artificial sweeteners, he told Live Science.

In the second study, published in March in the journal Alzheimer’s & Dementia, the researchers looked at brain scans and results of cognitive tests conducted in about 4,000 people. The scientists also asked the study participants if they consumed any diet or sugary beverages, and, if so, how much.

The data revealed a link between the consumption of both diet and sugary beverages and smaller brain volumes. Moreover, the researchers found a link between the consumption of sugary beverages and poorer memory. All of those outcomes are risk factors for Alzheimer’s disease, the researchers said.

As with the first study, the mechanisms that might underlie the link between the consumption of sugary beverages and these outcomes are unclear, Pase told Live Science. However, previous research has linked high sugar intake with diabetes and high blood pressure — conditions linked to compromised blood circulation that may ultimately affect brain health, he said.

Originally published on Live Science.

http://www.theaustralian.com.au/business/companies/apple-chief-lambasted-uber-boss-over-privacy/news-story/e70e39e5596a1c728b565176fdcf2955

Apple chief lambasted Uber boss over privacy

Tim Cook warned Uber’s CEO that the company would be blocked from the App Store if it did not stop flouting privacy rules. (AFP PHOTO/Andrew Caballero-Reynolds)

Uber’s chief executive was summoned by Tim Cook, the head of Apple, and told that the taxi-hailing business would be blocked from the App Store if it did not stop flouting privacy rules.

The dressing down took place two years ago, but the details emerged only at the weekend after sources spoke to The New York Times.

They said that Mr Cook had told Travis Kalanick, Uber’s co-founder and chief executive — dressed, apparently, in his trademark red trainers and pink socks — to stop “fingerprinting” users’ handsets without their knowledge.

Apple had discovered that Uber was covertly tagging people’s phones in a way that meant the company could still identify them even if the customer had deleted the app. Denial of access to millions of iPhone customers would have destroyed Uber’s business, so Mr Kalanick acceded to Apple’s demands.

Uber said that the tagging, which is permitted by Google’s Android platform, was intended to prevent fraud, where people load Uber on to stolen phones or take expensive rides and then delete the app. It did not enable Uber to track former customers’ movements, but would alert the company if known miscreants signed up again.

Nevertheless, the technology violates Apple’s privacy rules, which state that if a customer deletes an app, the provider’s presence should be wiped from their handset. Uber was aware of the rules, but the newspaper’s sources said that Mr Kalanick had told engineers to “geo-fence” Apple HQ so that people reviewing Uber’s software from that specific location would be unable to see the fingerprinting.

This did not work because engineers outside Apple’s head office in Cupertino, California, cottoned on to the company’s deception.
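
Geofencing itself is a simple technique: compare the device’s reported coordinates against a target point and radius. The generic sketch below shows the idea; the 5 km radius is an assumption, and none of this is Uber’s actual code:

    import kotlin.math.*

    // Generic geofence check using the haversine formula. The centre point is
    // Apple's Cupertino campus; radius and logic are illustrative assumptions.
    fun distanceMeters(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double {
        val r = 6_371_000.0  // mean Earth radius in metres
        val dLat = Math.toRadians(lat2 - lat1)
        val dLon = Math.toRadians(lon2 - lon1)
        val a = sin(dLat / 2).pow(2) +
                cos(Math.toRadians(lat1)) * cos(Math.toRadians(lat2)) * sin(dLon / 2).pow(2)
        return 2 * r * atan2(sqrt(a), sqrt(1 - a))
    }

    fun insideGeofence(lat: Double, lon: Double): Boolean =
        distanceMeters(lat, lon, 37.3318, -122.0312) < 5_000.0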

Uber has been accused repeatedly of breaking the rules. Three years ago it was caught using its so-called “God View” technology to track customers’ movements in real time without their consent. It insisted that it had robust policies to prohibit staff from accessing journey data, but former employees claimed recently that staff continued to track celebrities and personal acquaintances, including former partners. Uber denies the allegations.

This month it was reported that Uber had created a secret software system called “Hell” to track drivers from Lyft, a rival cab-hailing service, so that it could deploy extra drivers in the areas in which they operated. The Hell software was allegedly also used to poach drivers from Lyft. Uber declined to comment on the reports.

The company has also been accused of operating a “macho” work culture, an image compounded in February when secretly filmed footage of Mr Kalanick shouting at an Uber driver was leaked online. He now has a private driver.

The Times

https://techcrunch.com/2017/04/24/ai-report-fed-by-deepmind-amazon-uber-urges-greater-access-to-public-sector-datasets/

AI report fed by DeepMind, Amazon, Uber urges greater access to public sector data sets

What are tech titans Google, Amazon and Uber agitating for to further the march of machine learning technology and ultimately inject more fuel in the engines of their own dominant platforms? Unsurprisingly, they’re after access to data. Lots and lots of data.

Specifically, they’re pushing for free and liberal access to publicly funded data — urging that this type of data continue to be “open by default,” and structured in a way that supports “wider use of research data.” After all, why pay to acquire data when there are vast troves of publicly funded information ripe to be squeezed for fresh economic gain?

Other items on this machine learning advancement wish-list include new open standards for data (including metadata); research study design that has the “broadest consents that are ethically possible”; and a stated desire to rethink the notion of “consent” as a core plank of good data governance — to grease the pipe in favor of data access and make data holdings “fit for purpose” in the AI age.

These suggestions come in a 125-page report published today by the Royal Society, aka the U.K.’s national academy of science, ostensibly aimed at fostering an environment where machine learning technology can flourish in order to unlock mooted productivity gains and economic benefits — albeit the question of who, ultimately, benefits as more and more data gets squeezed to give up its precious insights is the overarching theme and unanswered question here. (Though the supportive presence of voices from three of tech’s most powerful machine learning deploying platform giants suggests one answer.)

Scramble for public data

The report, entitled Machine learning: the power and promise of computers that learn by example, is the work of the Royal Society’s working group on machine learning, whose 15-strong membership includes employees of three companies currently deploying machine learning at scale: Demis Hassabis, the founder and CEO of Google DeepMind, along with DeepMind’s research scientist, Yee Whye Teh; Neil Lawrence, Amazon’s director of machine learning; and Zoubin Ghahramani, chief scientist at Uber.

The report’s top-line recommendations boil down the more fleshed-out concerns in the meat of its chapters, and end up foregrounding encouragement at greater length than concern, as you might expect from a science academy — though the level of concern contained within its pages is notable nonetheless.

The report recommendations laud what is described as the U.K.’s “good progress” in increasing the accessibility of public sector data, urging “continued effort” towards “a new wave of ‘open data for machine learning’ by government to enhance the availability and usability of public sector data,” and calling for the government to “explore ways of catalysing the safe and rapid delivery” of new open standards for data which “reflect the needs of machine-driven analytical approaches.”

But an early glancing reference to “the value of strategic datasets” does get unpacked in more detail further into the report — with the recognition that early access to such valuable troves of publicly funded data could lock in commercial advantage. (Though you won’t find a single use of the word “monopoly” in the entire document.)

“It is necessary to recognise the value of some public sector data. While making such data open can bring benefits, considering how those benefits are distributed is important,” they write. “As machine learning becomes a more significant force, the ability to access data becomes more important, and those with access can attain a ‘first mover feedback’ advantage that can be significant. When there is such value at stake, it will be increasingly necessary to manage significant datasets or data sources strategically.”

There is no example of this kind of “first mover feedback advantage” set out in the report, but you could point to DeepMind’s data access partnerships with the U.K.’s National Health Service as a pertinent case study here. Not least as the original data-sharing arrangement that the Google-owned company reached with the Royal Free NHS Trust in London is controversial, having been agreed without patient knowledge or consent, and having scaled significantly in scope from its launch as a starter app hosting an NHS algorithm to (now) an ambitious plan to build a patient data API to broker third-party app makers’ access to NHS data. Also relevant, but unmentioned: the original DeepMind-Royal Free data-sharing agreement remains under investigation by U.K. data protection watchdogs.

Instead, the report flags up the value of NHS data — describing it as “one of the UK’s key data assets” — before going on to frame the notion of third-party access to U.K. citizens’ medical records as a case of “personal privacy vs public good,” suggesting that “appropriately controlled access mechanisms” could be developed to resolve what it dubs this “balancing act” (again, doing so without mentioning that DeepMind has already set itself the self-appointed task of developing a controlled access mechanism).

“If this balancing act is resolved, and if appropriately controlled access mechanisms can be developed, then there is huge potential for NHS data to be used in ways that will both improve the functioning of the NHS and improve healthcare delivery,” they write.

Yet exactly who stands to benefit economically from unlocking valuable healthcare insights from a publicly funded NHS is not discussed. Though common sense would tell you that Google/DeepMind believes there is a profitable business to be built off free access to millions of NHS patients’ health data and the first mover advantage that gives them — including the chance to embed themselves into healthcare service delivery via control of an access infrastructure.

In an accompanying summary to the report, a pullquote from another member of the working group, Hermann Hauser, co-founder of Amadeus Capital Partners, talks excitedly about potential transformative opportunities for businesses making use of machine learning tech. “There are exciting opportunities for machine learning in business, and it will be an important tool to help organisations make use of their — and other — data,” he is quoted as saying. “To achieve these potentially significant economic benefits, businesses will need to be able to access the right skills at different levels.”

The phrase “economic benefits” is at least mentioned here. But the raison d’etre of investors is to achieve a good exit. And there has been a rash of exits of machine learning firms to big tech giants engaged in the war for AI talent. DeepMind selling to Google for more than $500 million in 2014 being just one example. So investors have their own dog in the fight for a less stringent public sector data governance regime — and still get to cash out if an AI startup they bet on sells to a tech giant, rather than scales into one itself.

Julia Powles, a tech law and policy researcher at Cornell Tech, gives short shrift to the notion that lots of entrepreneurs stand to benefit if the public sector data floodgates are opened. “The idea that small guys can make use of their data is just a ruse. It’s only the big that will profit,” she tells TechCrunch.

Seismic shifts

Another portion of the report spends a lot of time apparently concerned with skills — discussing ways the government could encourage “a strong pipeline of practitioners in machine learning,” as it puts it — including urging it to make machine learning a priority area for additional PhD places, and to make near-term funding available for 1,000 extra PhDs (or more). Machine learning PhDs are of course top of the hiring tree for big tech giants that have the most cash to suck up these highly prized recruits, keeping them from being hired by startups, or indeed from starting their own competing businesses. So any increase at the top academic tier will be Google et al’s gain, first and foremost — more so if the public sector also paid to fund these extra PhD places.

The skills discussion (which includes suggestions to tweak school curriculum to include machine learning over the next five years) has to later be weighed against another portion of the report considering the potential impact of AI on jobs. Here the report cannot avoid the conclusion that machine learning will at the very least “change” work — and may well lead to seismic shifts in the employment prospects for large swathes of the workforce, which could also, the authors recognize, increase societal inequality. All of which does rather undermine the earlier suggestion that “everyone” in society will be able to upskill for a machine learning-driven future, given you can’t acquire skills for jobs that don’t exist… So the risk of AI generating a drastically asymmetric wealth and employment outcome is both firmly lodged in the report’s vision of future work — yet also kicked into a no man’s land of collective (i.e. zero ownership) responsibility.

“The potential benefits accruing from machine learning and their possibly significant consequences for employment need active management,” they write. “Without such stewardship, there is a risk that the benefits of machine learning may accrue to a small number of people, with others left behind, or otherwise disadvantaged by changes to society.

“While it is not yet clear how potential changes to the world of work might look, active consideration is needed now about how society can ensure that the increased use of machine learning is not accompanied by increased inequality and increased disaffection amongst certain groups. Thinking about how the benefits of machine learning can be shared by all is a key challenge for all of society.”

Ultimately, the report does call for “urgent consideration” to be given to what it describes as “the ‘careful stewardship’ needed over the next ten years to ensure that the dividends from machine learning… benefit all in UK society.” And it’s true to say, as we’ve said before, that policymakers and regulators do need to step up and start building frameworks and determining rules to ensure machine learning technologists do not have the chance to asset strip the public sector’s crown jewels before they’ve even been valued (not to mention leave future citizens unable to pay for the fancy services that will then be sold back to them, powered by machine learning models freely fatted up on publicly funded data).

But the suggested 10-year time frame seems disingenuous, to put it mildly. With — for instance — very large quantities of sensitive NHS data already flowing from the public sector into the hands of one of the world’s most market capitalized companies (Alphabet/Google/DeepMind) there would seem to be rather more short-term urgency for policymakers to address this issue — not leave it on the back burner for a decade or so. Indeed, parliamentarians have already been urging action on AI-related concerns like algorithmic accountability.

Perception and ethics

Public opinion is understandably a big preoccupation for the report authors — unsurprisingly so, given that a technology that potentially erodes people’s privacy and impacts their jobs risks being drastically unpopular. The Royal Society conducted a public poll on machine learning for the report, and say they found mixed views among Brits. Concerns apparently included “depersonalisation, or machine learning systems replacing valued human experiences; the potential impact of machine learning on employment; the potential for machine learning systems to cause harm, for example accidents in autonomous vehicles; and machine learning systems restricting choice, such as when directing consumers to specific products and services.”

“Ongoing public confidence will be central to realising the benefits that machine learning promises, and continued engagement between machine learning researchers and practitioners and the public will be important as the field develops,” they add.

The report suggests that large-scale machine learning research programs should include funding for “public engagement activities.” So there may at least, in the short term, be jobs for PR/marketing types to put a good spin on the “societal benefits of automation.” They also call for ethics to be taught as part of postgraduate study so that machine learning researchers are given “strong grounding in the broader societal implications of their work.” Which is a timely reminder that most of the machine learning tech already deployed in the wild, including commercially, has probably been engineered and implemented by minds lacking such a strong ethical grounding. (Not that we really need reminding.)

“Society needs to give urgent consideration to the ways in which the benefits from machine learning can be shared across society,” the report concludes. Which is another way of saying that machine learning risks concentrating wealth and power in the hands of a tiny number of massively powerful companies and individuals — at society’s expense. Whichever way you put it, there’s plenty of food for thought here.


http://www.ctvnews.ca/health/tele-empathy-device-allows-caregivers-to-really-feel-parkinson-s-symptoms-1.3381948

‘Tele-empathy’ device allows caregivers to really feel Parkinson’s symptoms

A Canadian company has created a device that can offer a glimpse into what it’s like to have Parkinson’s disease so others can better understand the daily frustrations of the debilitating disorder.

Klick Labs’ Sympulse is a first-of-its-kind device that can record the tremors of actual Parkinson’s patients. It can then wirelessly transmit the data to a second device worn by a caregiver, to allow them to truly feel what the patient is feeling.

The device, which resembles a blood pressure cuff, is strapped around the forearm, with a battery and motor pack providing the tremors.
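
Conceptually, the system is a record-and-replay loop: sample the patient’s tremor as a time series, transmit it, and drive the caregiver’s motor with the same signal. The sketch below is illustrative only; Klick Labs has not published its implementation, and all names and values here are assumptions:

    // Conceptual tremor record-and-replay. Parkinsonian rest tremor typically
    // sits around 4-6 Hz, so a ~100 Hz sampling rate is comfortably sufficient.
    data class TremorSample(val timestampMs: Long, val amplitude: Double)

    fun recordTremor(readAccelerometer: () -> Double, durationMs: Long): List<TremorSample> {
        val samples = mutableListOf<TremorSample>()
        val start = System.currentTimeMillis()
        while (System.currentTimeMillis() - start < durationMs) {
            samples += TremorSample(System.currentTimeMillis() - start, readAccelerometer())
            Thread.sleep(10)  // ~100 Hz sampling
        }
        return samples
    }

    fun replayTremor(samples: List<TremorSample>, driveMotor: (Double) -> Unit) {
        var last = 0L
        for (s in samples) {
            Thread.sleep(s.timestampMs - last)  // reproduce the original timing
            driveMotor(s.amplitude)             // map amplitude to motor intensity
            last = s.timestampMs
        }
    }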

Klick Labs VP Yan Fossat says the point of the “tele-empathy” device is to help those caring for people with the nerve disorder to get a real feel for the condition.

“This is intended to create empathy, to make you feel that having tremors is actually very debilitating; it’s not just a mild inconvenience,” he told CTV’s Your Morning.

“Feeling the disease with your own arm, with your own hand, is the best way to truly understand what it’s like,” he added.

By wearing the device and trying to perform everyday tasks, users can learn how tough it is to do simple things such as buttoning a shirt, slicing a tomato, or signing their name.

The device could be helpful not only for caregivers, but for physicians who often find their sense of empathy erodes after years of working with patients, Fossat says.

“We know that patients do better when doctors have increased empathy; there’s a lot of research that shows that,” he said.

But, he says, even those who struggle to empathize with patients can be shown how to develop the skill.

“Empathy is really hard to learn. Some people are good at it and some people find it hard. You can’t just learn empathy by watching a PowerPoint presentation,” Fossat said.

“This is the first time that technology is helping us learn the skill much faster.”