https://techcrunch.com/2017/11/30/google-assistant-can-now-help-you-find-a-plumber-and-other-local-services/

Google Assistant can now help you find a plumber and other local services

Google Assistant is about to get a bit more home-savvy. The voice assistant will be gaining local discovery skills that will help you locate home services nearby. The company specifically detailed that Assistant would be gaining the ability to help users locate “nearby services like an electrician, plumber, house cleaner and more.”

Saying something like, “Hey Google, I need a plumber,” will soon help you clarify your problem, pull up results for local services that can help you out of your jam and dial them up for you.

The blog post focused heavily on home services, though the move fits into Google’s broader strategy of helping Assistant cater results more locally for users. When it comes to real-world stores and services that have yet to be app-disrupted, there’s still a long way for voice assistants to go.

The new functionality will be rolling out to U.S. users starting today, and results will be screened in certain cities by Google and services like HomeAdvisor and Porch, so you hopefully won’t end up with somebody who doesn’t know what they’re doing. Voice assistants on display-less devices like Home fundamentally suck at concisely conveying choices, so it’s incredibly important that Google makes the first option the one you actually want.

The updated functionality will be coming to Android phones, Google’s iOS Assistant app and smart devices like Home.

http://business.financialpost.com/technology/alexa-start-the-meeting-amazon-brings-voice-command-technology-to-the-workplace

‘Alexa, start the meeting’: Amazon brings voice-command technology to the workplace

The skills can be accessed through Amazon’s Echo digital speakers and incorporated into workplace software

Amazon.com Inc. announced new voice-activated tools for the workplace, hoping that verbal commands — “Alexa, print my spreadsheet” — will handle common office tasks.

Alexa for Business will let users issue voice commands to begin a video conference or print documents, among a multitude of common workplace functions, Amazon said Thursday at its cloud computing conference in Las Vegas.

“You no longer ever have to dial in a conference ID,” Amazon’s Chief Technology Officer Werner Vogels said, introducing the service. “Just say ‘Alexa, start the meeting.’”

The skills can be accessed through Amazon’s Echo digital speakers and incorporated into workplace software. Amazon wants to bring to the office its voice-activated technology that customers are using to control thermostats and order pizzas from home. The company is seeking to make Alexa ubiquitous in users’ lives and views voice-command as the next wave of accessing technology, similar to the mouse on a personal computer and touch screens on smartphones.

Amazon’s product will compete with Microsoft Corp., which is pitching its Cortana voice-activated digital assistant and conference-call programs for similar office tasks. Alexa for Business will be able to use calendars and contact information stored in Microsoft’s popular Exchange software, Vogels said.

Customers already using Alexa for Business include WeWork Cos., which is installing Echo devices in its shared office spaces, and Capital One Financial Corp., Vogels said.

“Once you are used to a more natural way of interacting with your environment, you will not go back,” Vogels said. “If voice is a natural way to interact with your home, why don’t we build something you can interact with at work as well?”

Bloomberg News

http://www.kurzweilai.net/new-nanomaterial-quantum-encryption-system-could-be-ultimate-defenses-against-hackers

New nanomaterial, quantum encryption system could be ultimate defenses against hackers

November 29, 2017

New physically unclonable nanomaterial (credit: Abdullah Alharbi et al./ACS Nano)

Recent advances in quantum computers may soon give hackers access to machines powerful enough to crack even the toughest of standard internet security codes. With these codes broken, all of our online data — from medical records to bank transactions — could be vulnerable to attack.

Now, a new low-cost nanomaterial developed by New York University Tandon School of Engineering researchers can be tuned to act as a secure authentication key to encrypt computer hardware and data. The layered molybdenum disulfide (MoS2) nanomaterial cannot be physically cloned (duplicated) — replacing programming, which can be hacked.

In a paper published in the journal ACS Nano, the researchers explain that the new nanomaterial has the highest possible level of structural randomness, making it physically unclonable. It achieves this with randomly occurring regions that alternately emit or do not emit light. When exposed to light, this pattern can be used to create a one-of-a-kind binary cryptographic authentication key that could secure hardware components at minimal cost.
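The thresholding step described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' code: the function name and the small random patch are made up, and the paper's real primitive uses a 2048-pixel array read out optically, with a carefully chosen threshold.

```python
import random

def key_from_photoemission(intensities, threshold):
    """Convert a 2-D photoemission map into a binary key string:
    pixels emitting above the threshold become 1, the rest 0."""
    return "".join(
        "1" if px > threshold else "0"
        for row in intensities for px in row
    )

# Simulated 4x4 patch of a randomly structured film (values are arbitrary
# stand-ins for measured photoemission intensities).
random.seed(7)
patch = [[random.random() for _ in range(4)] for _ in range(4)]
key = key_from_photoemission(patch, threshold=0.5)
print(key)  # a 16-bit key unique to this particular patch
```

Because the physical emission pattern cannot be duplicated, re-measuring a genuine device reproduces the same key, while any copy yields a different one.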

The research team envisions a future in which similar nanomaterials can be inexpensively produced at scale and applied to a chip or other hardware component. “No metal contacts are required, and production could take place independently of the chip fabrication process,” according to Davood Shahrjerdi, Assistant Professor of Electrical and Computer Engineering. “It’s maximum security with minimal investment.”

The National Science Foundation and the U.S. Army Research Office supported the research.

A high-speed quantum encryption system to secure the future internet

Schematic of the experimental quantum key distribution setup (credit: Nurul T. Islam et al./Science Advances)

Another approach to the hacker threat is being developed by scientists at Duke University, The Ohio State University and Oak Ridge National Laboratory. It would use the properties that drive quantum computers to create theoretically hack-proof forms of quantum data encryption.

Called quantum key distribution (QKD), it takes advantage of one of the fundamental properties of quantum mechanics: Measuring tiny bits of matter like electrons or photons automatically changes their properties, which would immediately alert both parties to the existence of a security breach. However, current QKD systems can only transmit keys at relatively low rates — up to hundreds of kilobits per second — which are too slow for most practical uses on the internet.
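The quoted principle (measurement disturbs quantum states, exposing an eavesdropper) can be illustrated with a toy simulation. This is a simplified BB84-style sketch with hypothetical function names, not the time-bin qudit protocol the researchers built: a bit measured in the wrong basis comes out random, so an intercept-and-resend attack raises the error rate on basis-matched bits to roughly 25%.

```python
import random

def measure(bit, prep_basis, meas_basis, rng):
    # Measuring in the preparation basis returns the bit faithfully;
    # a mismatched basis yields a 50/50 random outcome. This is the
    # quantum disturbance that QKD exploits to reveal eavesdroppers.
    return bit if prep_basis == meas_basis else rng.randint(0, 1)

def sifted_error_rate(n, eavesdrop, seed=0):
    rng = random.Random(seed)
    errors = matches = 0
    for _ in range(n):
        bit, basis = rng.randint(0, 1), rng.randint(0, 1)
        if eavesdrop:
            # Eve measures in a random basis and resends her result.
            eve_basis = rng.randint(0, 1)
            bit_out = measure(bit, basis, eve_basis, rng)
            prep_basis = eve_basis
        else:
            bit_out, prep_basis = bit, basis
        bob_basis = rng.randint(0, 1)
        result = measure(bit_out, prep_basis, bob_basis, rng)
        if bob_basis == basis:  # keep only matching-basis rounds
            matches += 1
            errors += result != bit
    return errors / matches

print(sifted_error_rate(20000, eavesdrop=False))  # prints 0.0
print(sifted_error_rate(20000, eavesdrop=True))   # roughly 0.25
```

A nonzero error rate on the sifted bits is exactly the signature that alerts both parties to a security breach.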

The new experimental QKD system is capable of creating and distributing encryption codes at megabit-per-second rates — five to 10 times faster than existing methods and on a par with current internet speeds when running several systems in parallel. In an online open-access article in Science Advances, the researchers show that the technique is secure from common attacks, even in the face of equipment flaws that could open up leaks.

This research was supported by the Office of Naval Research, the Defense Advanced Research Projects Agency, and Oak Ridge National Laboratory.


Abstract of Physically Unclonable Cryptographic Primitives by Chemical Vapor Deposition of Layered MoS2

Physically unclonable cryptographic primitives are promising for securing the rapidly growing number of electronic devices. Here, we introduce physically unclonable primitives from layered molybdenum disulfide (MoS2) by leveraging the natural randomness of their island growth during chemical vapor deposition (CVD). We synthesize a MoS2 monolayer film covered with speckles of multilayer islands, where the growth process is engineered for an optimal speckle density. Using the Clark–Evans test, we confirm that the distribution of islands on the film exhibits complete spatial randomness, hence indicating the growth of multilayer speckles is a spatial Poisson process. Such a property is highly desirable for constructing unpredictable cryptographic primitives. The security primitive is an array of 2048 pixels fabricated from this film. The complex structure of the pixels makes the physical duplication of the array impossible (i.e., physically unclonable). A unique optical response is generated by applying an optical stimulus to the structure. The basis for this unique response is the dependence of the photoemission on the number of MoS2 layers, which by design is random throughout the film. Using a threshold value for the photoemission, we convert the optical response into binary cryptographic keys. We show that the proper selection of this threshold is crucial for maximizing combination randomness and that the optimal value of the threshold is linked directly to the growth process. This study reveals an opportunity for generating robust and versatile security primitives from layered transition metal dichalcogenides.


Abstract of Provably secure and high-rate quantum key distribution with time-bin qudits

The security of conventional cryptography systems is threatened in the forthcoming era of quantum computers. Quantum key distribution (QKD) features fundamentally proven security and offers a promising option for quantum-proof cryptography solution. Although prototype QKD systems over optical fiber have been demonstrated over the years, the key generation rates remain several orders of magnitude lower than current classical communication systems. In an effort toward a commercially viable QKD system with improved key generation rates, we developed a discrete-variable QKD system based on time-bin quantum photonic states that can generate provably secure cryptographic keys at megabit-per-second rates over metropolitan distances. We use high-dimensional quantum states that transmit more than one secret bit per received photon, alleviating detector saturation effects in the superconducting nanowire single-photon detectors used in our system that feature very high detection efficiency (of more than 70%) and low timing jitter (of less than 40 ps). Our system is constructed using commercial off-the-shelf components, and the adopted protocol can be readily extended to free-space quantum channels. The security analysis adopted to distill the keys ensures that the demonstrated protocol is robust against coherent attacks, finite-size effects, and a broad class of experimental imperfections identified in our system.

http://blog.wolfram.com/2017/11/30/finding-x-in-espresso-adventures-in-computational-lexicology/

Finding X in Espresso: Adventures in Computational Lexicology

November 30, 2017 — Vitaliy Kaurov, Technical Communication & Strategy

When Does a Word Become a Word?

“A shot of expresso, please.” “You mean ‘espresso,’ don’t you?” A baffled customer, a smug barista—the media is abuzz with one version or another of this story. But the real question is not whether “expresso” is a correct spelling, but rather how spellings evolve and enter dictionaries. Lexicographers do not directly decide that; the data does. Long and frequent usage may qualify a word for endorsement. Moreover, I believe the emergent proliferation of computational approaches can help to form an even deeper insight into the language. The tale of expresso is a thriller from a computational perspective.

X in expresso data analysis poster

In the past I had taken the incorrectness of expresso for granted. And how could I not, with the thriving pop-culture of “no X in espresso” posters, t-shirts and even proclamations from music stars such as “Weird Al” Yankovic. Until a statement in a recent note by Merriam-Webster’s online dictionary caught my eye: “… expresso shows enough use in English to be entered in the dictionary and is not disqualified by the lack of an x in its Italian etymon.” Can this assertion be quantified? I hope this computational treatise will convince you that it can. But to set the backdrop right, let’s first look into the history.

Expresso in video segment
No X in espresso poster

History of Industry and Language

In the 19th century’s steam age, many engineers tackled steam applications accelerating the coffee-brewing process to increase customer turnover, as coffee was a booming business in Europe. The original espresso machine is usually attributed to Angelo Moriondo from Turin, who obtained a patent in 1884 for “new steam machinery for the economic and instantaneous confection of coffee beverage.” But despite further engineering improvements (see the Smithsonian), for decades espresso remained only a local Italian delight. And for words to jump between languages, industries need to jump the borders—this is how industrial evolution triggers language evolution. The first Italian to truly venture into the espresso business internationally was Achille Gaggia, a coffee bartender from Milan.

Expresso timeline

In 1938 Gaggia patented a new method using the celebrated lever-driven piston mechanism allowing new record-brewing pressures, quick espresso shots and, as a side effect, even crema foam, a future signature of an excellent espresso. This allowed the Gaggia company (founded in 1948) to commercialize the espresso machines as a consumer product for use in bars. There was about a decade span between the original 1938 patent and its 1949 industrial implementation.

Original espresso maker

Around 1950, espresso machines began crossing Italian borders to the United Kingdom, America and Africa. This is when the first large spike happens in the use of the word espresso in the English language. The spike and following rapid growth are evident from the historic WordFrequencyData of published English corpora plotted across the 20th century:

history[w_] :=   WordFrequencyData[w, "TimeSeries", {1900, 2000}, IgnoreCase -> True]

The function above gets TimeSeries data for the frequencies of words w in a fixed time range from 1900–2000 that, of course, can be extended if needed. The data can be promptly visualized with DateListPlot:

DateListPlot[history[{"espresso", "expresso"}], PlotRange -> All,   PlotTheme -> "Wide"]

The much less frequent expresso also gains its popularity slowly but steadily. Its simultaneous growth is more obvious with the log-scaled vertical frequency axis. To be able to easily switch between log and regular scales and also improve the visual comprehension of multiple plots, I will define a function:

vkWordFreqPlot[list_, plot_] :=
  plot[MovingAverage[#, 3] & /@
    WordFrequencyData[list, "TimeSeries", {1900, 2000}, IgnoreCase -> True],
   PlotTheme -> "Detailed", AspectRatio -> 1/3, Filling -> Bottom,
   PlotRange -> All, InterpolationOrder -> 2,
   PlotLegends -> Placed[Automatic, {Left, Top}]];

The plot below also compares the espresso/expresso pair to a typical pair acknowledged by dictionaries, unfocused/unfocussed, stemming from American/British usage:

vkWordFreqPlot[{"espresso", "expresso", "unfocused", "unfocussed"}, DateListLogPlot]

The overall temporal behavior of frequencies for these two pairs is quite similar, as it is for many other words of alternative orthography acknowledged by dictionaries. So why is espresso/expresso so controversial? A good historical account is given by Slate Magazine, which, as does Merriam-Webster, supports the official endorsement of expresso. And while both articles give a clear etymological reasoning, the important argument for expresso is its persistent frequent usage (even in such distinguished publications as The New York Times). As it stands as of the date of this blog, the following lexicographic vote has been cast in support of expresso by some selected trusted sources I scanned through. Aye: Merriam-Webster online, Harper Collins online, Random House online. Nay: Cambridge Dictionary online, Oxford Learner’s Dictionaries online, Oxford Dictionaries online (“The spelling expresso is not used in the original Italian and is strictly incorrect, although it is common”; see also the relevant blog), Garner’s Modern American Usage, 3rd edition (“Writers frequently use the erroneous form [expresso]”).

In times of dividing lines, data helps us to refocus on the whole picture and dominant patterns. To stress diversity of alternative spellings, consider the pair amok/amuck:

vkWordFreqPlot[{"amok", "amuck"}, DateListPlot]

Of a rather macabre origin, amok came to English around the mid-1600s from the Malay amuk, meaning “murderous frenzy,” referring to a psychiatric disorder of a manic urge to murder. The pair amok/amuck has interesting characteristics. Both spellings can be found in dictionaries. The WordFrequencyData above shows the rich dynamics of oscillating popularity, followed by the competitive rival amuck becoming the underdog. The difference in orthography does not have a typical British/American origin, which should affect how alternative spellings are sampled for statistical analysis further below. And finally, the Levenshtein EditDistance is not equal to 1…

EditDistance["amok", "amuck"]

… in contrast to many typical cases such as:

EditDistance @@@ {{"color", "colour"}, {"realize",     "realise"}, {"aesthetic", "esthetic"}}

This will also affect the sampling of data. My goal is to extract from a dictionary a data sample large enough to describe the diversity of alternatively spelled words that are also structurally close to the espresso/expresso pair. If the basic statistics of this sample assimilate the espresso/expresso pair well, then it quantifies and confirms Merriam-Webster’s assertion that “expresso shows enough use in English to be entered in the dictionary.” But it also goes a step further, because now all pairs from the dictionary sample can be considered as precedents for legitimizing expresso.

Dictionary as Data

Alternative spellings come in pairs and should not be considered separately, because there is statistical information in their relation to each other. For instance, the word frequency of expresso should not be compared with the frequency of an arbitrary word in a dictionary. Instead, we should consider an alternative spelling pair as a single data point with coordinates {f+, f−}, denoting the higher and lower word frequency of the more and less popular spelling respectively, and always in that order. I will use the weighted average of a word frequency over all years and all data corpora. It is a better overall metric than a word frequency at a specific date, and avoids the confusion of a frequency changing its state between higher f+ and lower f− at different time moments (as we saw for amok/amuck). Weighted average is the default value of WordFrequencyData when no date is specified as an argument.

The starting point is a dictionary that is represented in the Wolfram Language by WordList and contains 84,923 definitions:

Length[words = WordList["KnownWords"]]

There are many types of dictionaries with quite varied sizes. There is no dictionary in the world that contains all words. And, in fact, all dictionaries are outdated as soon as they are published due to continuous language evolution. My assumption is that the exact size or date of a dictionary is unimportant as long as it is “modern and large enough” to produce a quality sample of spelling variants. The curated built-in data of the Wolfram Language, such as WordList, does a great job at this.

We notice right away that language is often prone to quite simple laws and patterns. For instance, it is widely assumed that lengths of words in an English dictionary…

Histogram[StringLength[words], Automatic, "PDF",   PlotTheme -> "Detailed", PlotRange -> All]

… follow quite well one of the simplest statistical distributions, the PoissonDistribution. The Wolfram Language machine learning function FindDistribution picks up on that easily:

FindDistribution[StringLength[words]]

Show[%%, DiscretePlot[PDF[%, k], {k, 0, 33}, Joined -> True]]

My goal is to search for such patterns and laws in the sample of alternative spellings. But first they need to be extracted from the dictionary.

Extracting Spelling Variants

For ease of data processing and analysis, I will make a set of simplifications. First of all, only the following basic parts of speech are considered to bring data closer to the espresso/expresso case:

royalTypes = {"Noun", "Adjective", "Verb", "Adverb"};

This reduces the dictionary to 84,487 words:

royals = DeleteDuplicates[
   Flatten[WordList[{"KnownWords", #}] & /@ royalTypes]];
Length[royals]

Deletion of duplicates is necessary, because the same word can be used as several parts of speech. Further, the words containing any characters beyond the lowercase English alphabet are excluded:

outlaws = Complement[Union[Flatten[Characters[words]]], Alphabet[]]

This also removes all proper names, and drops the number of words to 63,712:

laws = Select[royals, ! StringContainsQ[#, outlaws] &];
Length[laws]

Every word is paired with the list of its definitions, and every list of definitions is sorted alphabetically to ensure exact matches in determining alternative spellings:

Define[w_] := w -> Sort[WordDefinition[w]];
defs = Define /@ laws;

Next, words are grouped by their definitions; single-word groups are removed, and definitions themselves are removed too. The resulting dataset contains 8,138 groups:

samedefs =   Replace[GatherBy[defs, Last], {_ -> _} :> Nothing, 1][[All, All, 1]]

Length[samedefs]

Different groups of words with the same definition have a variable number of words n ≥ 2…

Framed[TableForm[Transpose[groups = Sort[Tally[Length /@ samedefs]]],    TableHeadings -> {groupsHead = {"words, n", "groups, m"}, None},    TableSpacing -> {1, 2}]]

… where m is the number of groups. They follow a remarkable power law; very roughly, as an order-of-magnitude estimate, m ~ 200000 n^(-5).

Show[ListLogLogPlot[groups, PlotTheme -> "Business",    FrameLabel -> groupsHead],  Plot[Evaluate[Fit[Log[groups], {1, x}, x]], {x, Log[2], Log[14]},    PlotStyle -> Red]]

Close synonyms are often grouped together:

Select[samedefs, Length[#] == 10 &]

This happens because WordDefinition is usually quite concise:

WordDefinition /@ {"abjure", "forswear", "recant"}

To separate synonyms from alternative spellings, I could use heuristics based on orthographic rules formulated for classes such as British versus American English. But that would be too complex and unnecessary. It is much easier to consider only word pairs that differ by a small Levenshtein EditDistance. It is highly improbable for synonyms to differ by just a few letters, especially a single one. So while this excludes not only synonyms but also alternative spellings such as amok/amuck, it does help to select words closer to espresso/expresso and hopefully make the data sample more uniform. The computations can be easily generalized to a larger Levenshtein EditDistance, but it would be important and interesting to first check the most basic case:

EditOne[l_] :=
  l[[#]] & /@ Union[Sort /@ Position[Outer[EditDistance, l, l], 1]];
samedefspair = Flatten[EditOne /@ samedefs, 1]

This reduces the sample size to 2,882 pairs:

Length[samedefspair]

Mutations of Spellings

Alternative spellings are different orthographic states of the same word that have different probabilities of occurrence in the corpora. They can inter-mutate based on the context or environment they are embedded into. Analysis of such mutations seems intriguing. The mutations can be extracted with help of the SequenceAlignment function. It is based on algorithms from bioinformatics identifying regions of similarity in DNA, RNA or protein sequences, and often wandering into other fields such as linguistics, natural language processing and even business and marketing research. The mutations can be between two characters or a character and a “hole” due to character removal or insertion:

SequenceAlignment @@@ {{"color", "colour"}, {"mesmerise", "mesmerize"}}

In the extracted mutations’ data, the “hole” is replaced by a dash (-) for visual distinction:

mutation =   Cases[SequenceAlignment @@@ samedefspair, _List, {2}] /. "" -> "-"

The most probable letters to participate in a mutation between alternative spellings can be visualized with Tally. The most popular letters are s and z thanks to the British/American endings -ise/-ize, surpassed only by the popularity of the “hole.” This probably stems from the fact that dropping letters often makes orthography and phonetics easier.

vertex = Association[Rule @@@ SortBy[Tally[Flatten[mutation]], Last]];
optChart = {ColorFunction -> "Rainbow", BaseStyle -> 15, PlotTheme -> "Web"};
inChar = PieChart[vertex, optChart, ChartLabels -> Callout[Automatic],
   SectorOrigin -> -Pi/9];
BarChart[Reverse[vertex], optChart, ChartLabels -> Automatic,
 Epilog -> Inset[inChar, Scaled[{.6, .5}], Automatic, Scaled[1.1]]]

Querying Word Frequencies

The next step is to get the WordFrequencyData for all 2 x 2882 = 5764 words of alternative spelling stored in the variable samedefspair. WordFrequencyData is a very large dataset, and it is stored on Wolfram servers. To query frequencies for a few thousand words efficiently, I wrote some special code that can be found in the notebook attached at the end of this blog. The resulting data is an Association containing alternative spellings, with ordered pairs of words as keys and ordered pairs of frequencies as values. The higher-frequency entry is always first:

data

The size of the data is slightly less than the original queried set because for some words, frequencies are unknown:

{Length[data], Length[samedefspair] - Length[data]}

Basic Analysis

Having obtained the data, I am now ready to check how well the frequencies of espresso/expresso fall within this data:

esex = Values[   WordFrequencyData[{"espresso", "expresso"}, IgnoreCase -> True]]

As a start, I will examine if there are any correlations between lower and higher frequencies. Pearson’s Correlation coefficient, a measure of the strength of the linear relationship between two variables, gives a high value for lower versus higher frequencies:

Correlation @@ Transpose[Values[data]]

But plotting frequency values at their natural scale hints that a log scale could be more appropriate:

ListPlot[Values[data], AspectRatio -> Automatic,   PlotTheme -> "Business", PlotRange -> All]

And indeed for log-values of frequencies, the Correlation strength is significantly higher:

Correlation @@ Transpose[Log[Values[data]]]

Fitting the log-log of data reveals a nice linear fit…

lmf = LinearModelFit[Log[Values[data]], x, x];
lmf["BestFit"]

… with sensible statistics of parameters:

lmf["ParameterTable"]

In the frequency space, this shows a simple and quite remarkable power law that sheds light on the nature of correlations between the frequencies of less and more popular spellings of the same word:

Reduce[Log[SubMinus[f]] == lmf["BestFit"] /.    x -> Log[SubPlus[f]], SubMinus[f], Reals]

Log-log space gives a clear visualization of the data. Obviously, due to the {greater, smaller} sorting of coordinates {f+, f−}, no data point can exceed the limiting orange line Log[f−] == Log[f+]. The purple line is the linear fit of the power law. The red circle is the median of the data, and the red dot is the value of the espresso/expresso frequency pair:
ListLogLogPlot[data, PlotRange -> All, AspectRatio -> Automatic,
 PlotTheme -> "Detailed", ImageSize -> 800,
 Epilog -> {
   {Purple, Thickness[.004], Opacity[.4],
    Line[Transpose[{{-30, 0}, Normal[lmf] /. x -> {-30, 0}}]]},
   {Orange, Thickness[.004], Opacity[.4], Line[{-30 {1, 1}, -10 {1, 1}}]},
   {Red, Opacity[.5], PointSize[.02], Point[Log[esex]]},
   {Red, Opacity[.5], Thickness[.01],
    Circle[Median[Log[Values[data]]], .2]}}]

A simple, useful transformation of the coordinate system will help our understanding of the data. Away from log-frequency vs. log-frequency space we go. The distance from a data point to the orange line Log[f−] == Log[f+] is the measure of how many times larger the higher frequency is than the lower. It is given by a linear transformation—rotation of the coordinate system by 45 degrees. Because this distance is given by a difference of logs, it relates to the ratio of frequencies:

TraditionalForm[PowerExpand[Log[(SubPlus[f]/SubMinus[f])^2^(-1/2)]]]

This random variable is well fit by the very famous and versatile WeibullDistribution, which is used universally: in weather forecasting to describe wind speed distributions; in survival analysis; in reliability, industrial and electrical engineering; in extreme value theory; in forecasting technological change; and much more—including, now, word frequencies:

dist = FindDistribution[   trans = (#1 - #2)/Sqrt[2] & @@@ Log[Values[data]]]

One of the most fascinating facts is “The Unreasonable Effectiveness of Mathematics in the Natural Sciences,” which is the title of a 1960 paper by the physicist Eugene Wigner. One of its notions is that mathematical concepts often apply uncannily and universally far beyond the context in which they were originally conceived. We might have glimpsed at that in our data.

Using statistical tools, we can figure out that in the original space the frequency ratio obeys a distribution with a nice analytic formula:

Assuming[SubPlus[f]/SubMinus[f] > 1,
 PDF[TransformedDistribution[E^(Sqrt[2] u),
   u \[Distributed] WeibullDistribution[a, b]], SubPlus[f]/SubMinus[f]]]

It remains to note that the other corresponding transformed coordinate relates to the frequency product…

TraditionalForm[PowerExpand[Log[(SubPlus[f] SubMinus[f])^2^(-1/2)]]]

… and is the position of a data point along the orange line Log[f−] == Log[f+]. It reflects how popular, on average, a specific word pair is among other pairs. One can see that the espresso/expresso value lands well above the median, meaning the frequency of its usage is higher than that of half of the data points.

Nearest can find the closest pairs to espresso/expresso, measured by EuclideanDistance in the frequency space. Taking a look at the 50 nearest pairs shows just how typical the espresso/expresso frequencies are, shown below by a red dot. Many nearest neighbors, such as energize/energise and zombie/zombi, belong to the basic everyday vocabulary of most frequent usage:

neighb = Nearest[data, esex, 50];
ListPlot[Association @@ Thread[neighb -> data /@ neighb],
 PlotRange -> All, AspectRatio -> Automatic, PlotTheme -> "Detailed",
 Epilog -> {{Red, Opacity[.5], PointSize[.03], Point[esex]}}]

The temporal behavior of frequencies for a few nearest neighbors shows significant diversity and often is generally reminiscent of such behavior for the espresso/expresso pair that was plotted at the beginning of this article:

Multicolumn[vkWordFreqPlot[#, DateListPlot] & /@ neighb[[;; 10]], 2]

Networks of Mutation

Frequencies allow us to define a direction of mutation, which can be visualized by a DirectedEdge always pointing from lower to higher frequency. A Tally of the edges defines weights (or not-normalized probabilities) of particular mutations.

muteWeigh = Tally[
   Cases[SequenceAlignment @@@ Keys[data], _List, {2}] /. "" -> "-"];
edge = Association[Rule @@@ Transpose[{
     DirectedEdge @@ Reverse[#] & /@ muteWeigh[[All, 1]],
     N[Rescale[muteWeigh[[All, 2]]]]}]];

For clarity of visualization, all edges with weights less than 1% of the maximum value are dropped (the threshold used in the Select below). The most popular mutation is s → z, with maximum weight 1. It is interesting to note that reverse mutations might occur too; for instance, z → s with weight 0.0347938, but much less often:

cutEdge = ReverseSort[Select[edge, # > .01 &]]

PieChart[cutEdge, optChart, ChartLabels -> Callout[Automatic]]

Thus a letter can participate in several types of mutations, and in this sense mutations form a network. The size of a vertex correlates with the probability of a letter participating in any mutation (see the variable vertex above):

vs = Thread[Keys[vertex] -> 2 N[.5 + Rescale[Values[vertex]]]];

The larger the edge weight, the brighter the edge:

es = Thread[    Keys[cutEdge] -> (Directive[Thickness[.003], Opacity[#]] & /@        N[Values[cutEdge]^.3])];

The letters r and g participate mostly in the deletion mutation. Letters with no edges participate in very rare mutations.

graphHighWeight =   Graph[Keys[vertex], Keys[cutEdge], PerformanceGoal -> "Quality",   VertexLabels -> Placed[Automatic, Center], VertexLabelStyle -> 15,    VertexSize -> vs, EdgeStyle -> es]

Among a few interesting substructures, one of the most obvious is the high clustering of vowels. A Subgraph of vowels can be easily extracted…

vowels = {"a", "e", "i", "o", "u"}; Subgraph[graphHighWeight, vowels, GraphStyle -> "SmallNetwork"]

… and checked for completeness, which yields False due to many missing edges from and to u:

CompleteGraphQ[%]
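Completeness of a small subgraph is easy to check by hand: every pair of distinct vertices must be joined by at least one edge. A Python sketch (ignoring edge direction, with made-up vowel edges standing in for the extracted subgraph):

```python
from itertools import combinations

def complete_q(vertices, edges):
    """True if every pair of distinct vertices is joined by an edge,
    ignoring direction (a rough stand-in for CompleteGraphQ)."""
    undirected = {frozenset(e) for e in edges}
    return all(frozenset(pair) in undirected
               for pair in combinations(vertices, 2))

vowels = ["a", "e", "i", "o", "u"]
some_edges = [("a", "e"), ("e", "i"), ("a", "i"), ("o", "a"),
              ("e", "o"), ("i", "o"), ("a", "u")]
# edges from and to "u" are mostly missing, so this subgraph is not complete
```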

Nevertheless, as you might remember, the low-weight edges were dropped to keep the high-weight edges legible. Are there any interesting observations related to low-weight edges? As a matter of fact, yes, there are. Let’s quickly rebuild a full subgraph for only the vowels. Vertex sizes are still based on the tally of letters in mutations:

vowelsVertex =   Association @@    Cases[Normal[vertex], Alternatives @@ (# -> _ & /@ vowels)]

vsVow = Thread[    Keys[vowelsVertex] -> .2 N[.5 + Rescale[Values[vowelsVertex]]]];

All mutations of vowels in the dictionary can be extracted with the help of MemberQ:

vowelsMute = Select[muteWeigh, And @@ (MemberQ[vowels, #] & /@ First[#]) &];
vowelsEdge = Association[Rule @@@ Transpose[
    MapAt[DirectedEdge @@ Reverse[#] & /@ # &, Transpose[vowelsMute], 1]]]

In order to visualize exactly the number of vowel mutations in the dictionary, the edge style is kept uniform and edge labels are used for nomenclature:

vowelGraph = Graph[Keys[vowelsVertex], Keys[vowelsEdge],   EdgeWeight -> vowelsMute[[All, 2]], PerformanceGoal -> "Quality",    VertexLabels -> Placed[Automatic, Center], VertexLabelStyle -> 20,    VertexSize -> vsVow, EdgeLabels -> "EdgeWeight",    EdgeLabelStyle -> Directive[15, Bold]]

And now when we consider all (even small-weight) mutations, the graph is complete:

CompleteGraphQ[vowelGraph]

But this completeness is quite “weak” in the sense that there are many edges with a really small weight, in particular two edges with weight 1:

Select[vowelsMute, Last[#] == 1 &]

This means that there is only one alternative word pair for the eu mutation, and likewise for the io mutation. With the help of a lookup function…

lookupMute[l_] := With[{keys = Keys[data]}, keys[[Position[       SequenceAlignment @@@ keys /. "" -> "-",        Alternatives @@ l, {2}][[All, 1]]]]]

… these pairs can be found as:

lookupMute[{{"o", "i"}, {"u", "e"}}]
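A plain-Python version of such a lookup, again restricted to single substitutions and using hypothetical word pairs, could look like:

```python
def lookup_mute(pairs, targets):
    """Return the word pairs whose single differing letter pair is one of
    the target mutations (a sketch of the lookupMute idea)."""
    found = []
    for w1, w2 in pairs:
        if len(w1) != len(w2):
            continue  # insertions/deletions omitted in this sketch
        diffs = [(a, b) for a, b in zip(w1, w2) if a != b]
        if len(diffs) == 1 and diffs[0] in targets:
            found.append((w1, w2))
    return found

pairs = [("fiord", "fjord"), ("yarmulke", "yarmelke")]
# the u/e substitution picks out only the yarmulke pair
```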

Thus, thanks to these unique and quite exotic words, our dictionaries have eu and io mutations. Let’s check WordDefinition for these terms:

TableForm[WordDefinition /@ #,     TableHeadings -> {#, None}] &@{"corticofugal", "yarmulke"}

The word yarmulke is a quite curious case. First of all, it has three alternative spellings:

Nearest[WordData[], "yarmulke", {All, 1}]

Additionally, the Merriam-Webster Dictionary suggests a rich etymology: “Yiddish yarmlke, from Polish jarmułka & Ukrainian yarmulka skullcap, of Turkic origin; akin to Turkish yağmurluk rainwear.” The Turkic class of languages is quite wide:

EntityList[EntityClass["Language", "Turkic"]]

Together with the other mentioned languages, Turkic languages mark a large geographic area as the potential origin and evolution of the word yarmulke:

locs = DeleteDuplicates[Flatten[EntityValue[     {EntityClass["Language", "Turkic"],       EntityClass["Language", "Yiddish"], Entity["Language", "Polish"],       Entity["Language", "Ukrainian"]},      EntityProperty["Language", "PrimaryOrigin"]]]]

GeoGraphics[GeoMarker[locs, "Scale" -> Scaled[.03]],   GeoRange -> "World", GeoBackground -> "Coastlines",   GeoProjection -> "WinkelTripel"]

This evolution has Yiddish as an important stage before entering English, and Yiddish itself has a complex cultural history. English usage of yarmulke spikes around 1940–1945; hence World War II and the subsequent Cold War era are especially important periods of language migration, probably correlated with world migration and changes in Jewish communities during those times.

vkWordFreqPlot[{"yarmulke", "yarmelke", "yarmulka"}, DateListLogPlot]

These complex processes brought many more Yiddish words to English (my personal favorites are golem and glitch), but only a single one introduced the eu mutation into the whole English dictionary (at least within our dataset). So while there is currently no sx mutation in English (as in espresso/expresso), this is not a negative indicator, because there are officially endorsed mutations that are unique to a single word or just a few words. And in fact, there are many more such mutations with a small weight than with a large weight:

ListLogLogPlot[Sort[Tally[muteWeigh[[All, 2]]]],   PlotTheme -> "Detailed",  PlotRange -> All,   FrameLabel -> {"mutation weight", "number of mutations"},   Epilog -> Text[Style["s" \[DirectedEdge] "z", 15], Log@{600, 1.2}],   Filling -> Bottom]

So while the sz mutation happens in 777 words, it is the only mutation with that weight:

MaximalBy[muteWeigh, Last]
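The distribution behind the plot above is a tally of tallies: count how many distinct mutations share each weight. In Python, with toy counts standing in for muteWeigh:

```python
from collections import Counter

# toy stand-ins for the per-mutation occurrence counts in muteWeigh
mutation_counts = [777, 3, 2, 2, 1, 1, 1, 1]
histogram = Counter(mutation_counts)  # weight -> number of mutations with that weight
# the heaviest weight is held by a single mutation, while weight 1 is the most common
```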

On the other hand, there are 61 unique mutations that happen only once in a single word, as can be seen from the plot above. So in this sense, the most weighted sz mutation is an outlier, and if expresso enters a dictionary, then the espresso/expresso pair will join the majority of unique mutations with weight 1. These are the mutation networks for the first four small weights:

vkWeight[n_] := Select[muteWeigh, Last[#] == n &][[All, 1]]

vkMutationNetwork[n_] := Graph[DirectedEdge @@ Reverse[#] & /@ vkWeight[n],
  VertexLabels -> Placed[Automatic, Center], VertexLabelStyle -> 15,
  VertexSize -> Scaled[.07], AspectRatio -> 1, PerformanceGoal -> "Quality",
  PlotLabel -> "Mutation Weight = " <> ToString[n]]

Grid[Partition[vkMutationNetwork /@ Range[4], 2], Spacings -> {1, 1}, Frame -> All]

As the edge weight gets larger, networks become simpler—degenerating completely for very large weights. Let’s examine a particular set of mutations with a small weight—for instance, weight 2:

DirectedEdge @@ Reverse[#] & /@ Select[muteWeigh, Last[#] == 2 &][[All, 1]]

This means that each of these mutations occurs in only two alternative-spelling pairs (four words) in the whole dictionary:

Multicolumn[  Row /@ Replace[    SequenceAlignment @@@ (weight2 = lookupMute[vkWeight[2]]) /.      "" -> "-", {x_, y_} :> Superscript[x, Style[y, 13, Red]], {2}], 4]

Red marks a less popular letter, printed as a superscript of the more popular one. While the majority of these pairs are truly alternative spellings with a sometimes curiously dynamic history of usage…

vkWordFreqPlot[{"fjord", "fiord"}, DateListPlot]

… some occasional pairs, like distrust/mistrust, indicate blurred lines between alternative spellings and very close synonyms with close orthographic forms—here the prefixes mis- and dis-. Such rare situations can be considered as a source of noise in our data if someone does not want to accept them as true alternative spellings. My personal opinion is that the lines are blurred indeed, as the prefixes mis- and dis- themselves can be considered alternative spellings of the same semantic notion.

These small-weight mutations (white dots in the graph below) are distributed quite evenly among the rest of the data (black dots), which reflects their typicality. This can be visualized by constructing a density distribution with SmoothDensityHistogram, which uses SmoothKernelDistribution behind the scenes:

SmoothDensityHistogram[Log[Values[data]],  Mesh -> 50, ColorFunction -> "DarkRainbow", MeshStyle -> Opacity[.2],  PlotPoints -> 200, PlotRange -> {{-23, -11}, {-24, -12}}, Epilog -> {    {Black, Opacity[.4], PointSize[.002], Point[Log[Values[data]]]},    {White, Opacity[.7], PointSize[.01],      Point[Log[weight2 /. Normal[data]]]},    {Red, Opacity[1], PointSize[.02], Point[Log[esex]]},    {Red, Opacity[1], Thickness[.01],      Circle[Median[Log[Values[data]]], .2]}}]
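SmoothKernelDistribution builds a density by placing a kernel on each data point and summing; reduced to one dimension with a Gaussian kernel and a fixed bandwidth, the idea is just the following (a sketch only; Mathematica’s automatic bandwidth selection and 2-D machinery are omitted):

```python
from math import exp, pi, sqrt

def gaussian_kde(samples, bandwidth):
    """Return f(x): a normalized sum of Gaussian bumps centered on the samples."""
    norm = 1.0 / (len(samples) * bandwidth * sqrt(2 * pi))
    return lambda x: norm * sum(exp(-((x - s) / bandwidth) ** 2 / 2.0)
                                for s in samples)

f = gaussian_kde([0.0, 1.0, 2.0], bandwidth=0.5)
# the density is highest near the cluster of samples and falls off far away
```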

Some of these very exclusive, rare alternative spellings are used even more (or less) frequently than espresso/expresso, as shown above for the example of weight 2; the same can be shown for other weights. Color and contour lines provide a visual guide to where the density of the data points lies.

Conclusion

The following factors affirm why expresso should be allowed as a valid alternative spelling.

  • Espresso/expresso falls close to the median usage frequencies of 2,693 official alternative spellings with Levenshtein EditDistance equal to 1
  • The frequency of espresso/expresso usage as a whole pair is above the median, so it is more likely to be found in published corpora than half of the examined dataset
  • Many nearest neighbors of espresso/expresso in the frequency space belong to a basic vocabulary of the most frequent everyday usage
  • The history of espresso/expresso usage in English corpora shows simultaneous growth for both spellings, and by temporal pattern is reminiscent of many other official alternative spellings
  • The uniqueness of the sx mutation in the espresso/expresso pair is typical, as numerous other rare and unique mutations are officially endorsed by dictionaries
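The Levenshtein EditDistance criterion from the first bullet can be reproduced with the classic dynamic-programming recurrence:

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b (like EditDistance)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

# the espresso/expresso pair sits exactly at distance 1
```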

So all in all, it is ultimately up to you how to interpret this analysis or spell the name of the delightful Italian drink. But if you are a wisenheimer type, you might consider being a touch more open-minded. The origin of words, as with the origin of species, has its dark corners, and due to inevitable and unpredictable language evolution, one day your remote descendants might frown on the choice of s in espresso.

http://bgr.com/2017/11/29/iphone-x-battery-case-amazon-6000-mah/

This $35 iPhone X case packs a 6,000 mAh battery that nearly triples your battery life

If you’re like me, you’re pretty surprised at how good the battery life is on the iPhone X. Despite how compact the phone is, it can easily carry you through a full day of normal usage. Of course, problems arise with heavy usage, which is obviously when you need your phone the most. That’s where a gadget like the Vproof iPhone X Battery Case comes into play. It’s reasonably slim and yet it packs a massive 6,000 mAh battery that will nearly triple your iPhone X’s battery life. Definitely check it out.

Here’s what you need to know from the product page:

* ★More Than 150% Extra Power – This battery case’s powerful 6,000 mAh capacity helps keep your iPhone X charged the entire day.
* ★4-Level LED Power Indicator – The LED battery level indicator lets you know exactly how much charge is left (0–25%–50%–75%–100%), and you can easily switch the battery case on and off with the power switch.
* ★Advanced Sync & Hassle-Free Charging – You can sync data to your MacBook, PC or laptop and charge your phone at the same time without removing the case, so you can charge your iPhone any place, any time. Note that wired headphones cannot be used while charging.
* ★Built-in Magnetic Metal & Portable – The built-in magnetic metal lets the case work directly with a magnetic car mount while driving (car mount not included), and the handled design slips into your bag and can be held easily in one hand.
* ★100% Money-Back Guarantee – If you are not satisfied with the 6,000 mAh battery case for iPhone X, let us know and we will issue a full refund or replacement to make you a happy customer.

http://en.brinkwire.com/2703/trigger-for-most-common-form-of-vision-loss-discovered/

Trigger for most common form of vision loss discovered


In a major step forward in the battle against macular degeneration, the leading cause of vision loss among the elderly, researchers at the University of Virginia School of Medicine have discovered a critical trigger for the damaging inflammation that ultimately robs millions of their sight. The finding may allow doctors to halt the inflammation early on, potentially saving patients from blindness.

“Almost 200 million people in the world have macular degeneration. If macular degeneration were a country, it would be the eighth most populated nation in the world. That’s how large a problem this is,” said Jayakrishna Ambati, MD, vice chairman for research of UVA’s Department of Ophthalmology and the founding director of UVA’s Center for Advanced Vision Science. “For the first time, we know in macular degeneration what is one of the very first events that triggers the system to get alarmed and start, to use an anthropomorphic term, hyperventilating. This overdrive of inflammation is what ultimately damages cells, and so, potentially, we have a way of interfering very early in the process.”

Potential New Treatment for Macular Degeneration

Ambati and Nagaraj Kerur, PhD, assistant professor in the Department of Ophthalmology, and their laboratories have determined that the culprit is an enzyme called cGAS. The enzyme plays an important role in the body’s immune response to infections by detecting foreign DNA. But the molecule’s newly identified role in the “dry” form of age-related macular degeneration was wholly unexpected.

“It’s really surprising that in macular degeneration, which, as far as we know, has nothing to do with viruses or bacteria, that cGAS is activated, and that this alarm system is turned on,” Ambati said. “This is what leads to the killing of the cells in the retina, and, ultimately, vision loss.”

The researchers noted that cGAS may be an alarm not just for pathogens but for other harmful problems that warrant responses from the immune system. The enzyme may also play important roles in conditions such as diabetes, lupus and obesity, and researchers already are working to create drugs that could inhibit its function. “Because the target we’re talking about is an enzyme, we could develop small molecules that could block it,” Kerur said. “There are many drugs already on the market that target specific enzymes, such as the statins [which are used to lower cholesterol levels].”

The promising new lead comes as good news for researchers seeking to develop new treatments for dry macular degeneration, as clinical trials in recent years have come to dead end after dead end.

The UVA researchers expect the development of a drug to inhibit cGAS will take several years, and that drug would then need to go through extensive testing to determine its safety and effectiveness for combating macular degeneration.

The researchers also hope to develop a way to detect the levels of the enzyme in patients’ eyes. That would let them determine when best to administer a treatment that blocks cGAS. “If they have high levels of this enzyme in their eye, they might be a wonderful candidate for this sort of treatment,” Ambati said. “This is really precision medicine at the single-molecule level.”

Findings Published

The findings have been published in the prestigious scientific journal Nature Medicine. The research team consisted of Kerur, Shinichi Fukuda, Daipayan Banerjee, Younghee Kim, Dongxu Fu, Ivana Apicella, Akhil Varshney, Reo Yasuma, Benjamin J. Fowler, Elmira Baghdasaryan, Kenneth M. Marion, Xiwen Huang, Tetsuhiro Yasuma, Yoshio Hirano, Vlad Serbulea, Meenakshi Ambati, Vidya L. Ambati, Yuji Kajiwara, Kameshwari Ambati, Shuichiro Hirahara, Ana Bastos-Carvalho, Yuichiro Ogura, Hiroko Terasaki, Tetsuro Oshika, Kyung Bo Kim, David R. Hinton, Norbert Leitinger, John C. Cambier, Joseph D. Buxbaum, M. Cristina Kenney, S. Michal Jazwinski, Hiroshi Nagai, Isao Hara, A. Phillip West, Katherine A. Fitzgerald, SriniVas R. Sadda, Bradley D. Gelfand and Ambati.

The work was generously supported by many funding agencies, including the Director’s Pioneer Award from the National Institutes of Health (NIH) and grants from the NIH National Eye Institute and the John Templeton Foundation.

Ambati is co-founder of iVeena and Inflammasome Therapeutics, companies that develop products to battle macular degeneration and other inflammatory disorders. He, Kerur, Fowler and Kameshwari Ambati are also named as inventors on patent applications related to macular degeneration.

To keep up with the latest medical research news from UVA, subscribe to the Making of Medicine blog at http://makingofmedicine.virginia.edu.

https://www.techrepublic.com/article/4-iphone-x-face-id-tips-tricks-and-cool-features/

4 iPhone X Face ID tips, tricks, and cool features

Apple’s Face ID is one of the hot new features with iPhone X. Learn how to manage Face ID settings, and discover cool iOS features associated with Face ID that you might be overlooking.

Apple’s Face ID, which is one of the biggest features on the iPhone X, allows you to unlock your iPhone, authenticate with Apple Pay, and more using only your face. Face ID is more than just an authentication mechanism—it also provides useful iPhone X features, including the ability to lock out notifications to anyone but you.

Here is information on how to reset Face ID if it’s not working properly, as well as Face ID tips and tricks to make your iPhone X more secure.

SEE: Mobile device computing policy (Tech Pro Research)

How to reset Face ID

If you set up your iPhone X hastily, you might have incorrectly calibrated Face ID during the setup process. You can easily reset Face ID and recalibrate it if you’re having difficulty unlocking the device. Follow these steps.

  1. Open the Settings app.
  2. Navigate to Face ID & Passcode.
  3. Enter your iOS passcode to gain access to these settings.
  4. Tap Reset Face ID and follow the instructions for setting up Face ID again.

This will give you an opportunity to properly position your face in frame and complete the setup a second time if you’re having difficulty with Face ID.

How to use Face ID with third-party apps

Touch ID is not on the iPhone X—Face ID is now the authentication method used by apps that previously enabled Touch ID. When you first launch an app that can authenticate with Face ID/Touch ID, you’ll get a prompt from iOS asking if you’d like to give the app access to Face ID. Enabling this feature will give the app the ability to authenticate you with Face ID when logging in.

If you wish to see a list of the apps that have access and/or revoke or give access to previously denied apps, follow these steps.

  1. Open the Settings app.
  2. Navigate to Face ID & Passcode | Other Apps (Figure A).

Figure A


The Other Apps section of the Face ID & Passcode settings lets you view apps that have access to authenticate you with Face ID.

In this section you will be able to see all of the apps that have requested access to authenticate you with Face ID. Enabled apps are apps that you’ve given access to, and disabled apps are the ones that you’ve denied access to. This is where you can give or revoke access to any of the apps listed. Apps that have been denied will fall back to using a password-based authentication method.

 

How to enable or disable the attention aware features

The iPhone X can react differently when you’re looking at your device and giving it your direct attention, from hiding notifications on the Lock Screen to lowering the volume of incoming alerts. Apple will likely expand this functionality with future iOS updates.


To enable the attention aware features (or disable them if you don’t like them), follow these steps.

  1. Open the Settings app.
  2. Navigate to Face ID & Passcode.
  3. Enable (or disable) the option for Attention Aware Features.

Once enabled, the camera will check for your attention before dimming the display or lowering the volume of alerts. You’ll also see that the Lock Screen hides notification text.

SEE: Face ID is so good you won’t miss the Home button (ZDNet)

How to require your eyes are open to unlock the iPhone X

You may have noticed that, by default, unlocking your iPhone X with Face ID requires that your eyes are open and looking at the device. When enabled, this option uses the TrueDepth camera to provide an additional level of security, ensuring that you are looking at the iPhone X with your eyes open before it will unlock. With this disabled, only a facial feature check is performed. If you’re concerned about security, you should keep this option enabled.

You can disable this feature, or re-enable it, by following these steps.

  1. Open the Settings app.
  2. Navigate to Face ID & Passcode.
  3. Enable (or disable) the option for Require Attention for Face ID.

https://news.ubc.ca/2017/11/29/to-be-happier-spend-money-on-avoiding-household-chores-2/

To be happier, spend money on avoiding household chores

Forbes India reported on a UBC psychology study that suggests spending money to avoid household chores can lead to increased happiness.

Ashley Whillans, now at Harvard Business School, carried out the research as a UBC PhD candidate along with UBC psychology professor Elizabeth Dunn and others.

https://news.ubc.ca/2017/11/29/keeping-score-of-friends-on-facebook-instagram-may-harm-your-health/

Keeping score of ‘friends’ on social media may harm your health

The Conversation published an op-ed by Frances Chen, a UBC psychology professor, and Ashley Whillans at Harvard Business School about their research connecting social media posts and unhappiness.

“Our research suggests that the public nature of social activities can lead people to think that their peers are doing better socially than they are,” they wrote.

The op-ed also appeared on SF Gate and WTOP.