https://medicalxpress.com/news/2020-06-mind-blind-harder.html

Being ‘mind-blind’ may make remembering, dreaming and imagining harder

by University of New South Wales

Credit: Shutterstock

Picture the sun setting over the ocean.

It’s large above the horizon, spreading an orange-pink glow across the sky. Seagulls are flying overhead and your toes are in the sand.

Many people will have been able to picture the sunset clearly and vividly—almost like seeing the real thing. For others, the image would have been vague and fleeting, but still there.

If your mind was completely blank and you couldn’t visualise anything at all, then you might be one of the 2-5 percent of people who have aphantasia, a condition that involves a lack of all mental visual imagery.

“Aphantasia challenges some of our most basic assumptions about the human mind,” says Mr Alexei Dawes, Ph.D. Candidate in the UNSW School of Psychology.

“Most of us assume visual imagery is something everyone has, something fundamental to the way we see and move through the world. But what does having a ‘blind mind’ mean for the mental journeys we take every day when we imagine, remember, feel and dream?”

Mr Dawes was the lead author on a new aphantasia study, published today in Scientific Reports. It surveyed over 250 people who self-identified as having aphantasia, making it one of the largest studies on aphantasia yet.

“We found that aphantasia isn’t just associated with absent visual imagery, but also with a widespread pattern of changes to other important cognitive processes,” he says.

“People with aphantasia reported a reduced ability to remember the past, imagine the future, and even dream.”

Study participants completed a series of questionnaires on topics like imagery strength and memory. The results were compared with responses from 400 people spread across two independent control groups.

For example, participants were asked to remember a scene from their life and rate the vividness using a five-point scale, with one indicating “No image at all, I only ‘know’ that I am recalling the memory,” and five indicating “Perfectly clear and as vivid as normal vision.”

“Our data revealed an extended cognitive ‘fingerprint’ of aphantasia characterised by changes to imagery, memory, and dreaming,” says Mr Dawes.

“We’re only just starting to learn how radically different the internal worlds of those without imagery are.”

Subsets of aphantasia

While people with aphantasia wouldn’t have been able to picture the image of the sunset mentioned above, many could have imagined the feeling of sand between their toes, or the sound of the seagulls and the waves crashing in.

However, 26 percent of aphantasic study participants reported a broader lack of multi-sensory imagery—including imagining sound, touch, motion, taste, smell and emotion.

“This is the first scientific data we have showing that potential subtypes of aphantasia exist,” says Professor Joel Pearson, senior author on the paper and Director of UNSW Science’s Future Minds Lab.

Interestingly, spatial imagery—the ability to imagine distance or locational relationships between things—was the only form of sensory imagery that showed no significant differences between aphantasics and people who could visualise.

“The reported spatial abilities of aphantasics were on par with the control groups across many types of cognitive processes,” says Mr Dawes. “This included when imagining new scenes, during spatial memory or navigation, and even when dreaming.”

In action, spatial cognition could be playing Tetris and imagining how a certain shape would fit into the existing layout, or remembering how to navigate from A to B when driving.

In dreams and memories

While visualising a sunset is a voluntary action, involuntary forms of cognition—like dreaming—were also found to occur less in people with aphantasia.

“Aphantasics reported dreaming less often, and the dreams they do report seem to be less vivid and lower in sensory detail,” says Prof Pearson.

“This suggests that any cognitive function involving a sensory visual component—be it voluntary or involuntary—is likely to be reduced in aphantasia.”

Aphantasic individuals also experienced less vivid memories of their past and reported a significantly lower ability to remember past life events in general.

“Our work is the first to show that aphantasic individuals also show a reduced ability to remember the past and prospect into the future,” says Mr Dawes. “This suggests that visual imagery might play a key role in memory processes.”

Looking ahead

While up to one million Australians could have aphantasia, relatively little is known about it—to date, there have been fewer than 10 scientific studies on the condition.

More research is needed to deepen our understanding of aphantasia and how it impacts the daily lives of those who experience it.

“If you are one of the million Australians with aphantasia, what do you do when your yoga teacher asks you to ‘visualise a white light’ during a meditation practice?” asks Mr Dawes.

“How do you reminisce on your last birthday, or imagine yourself relaxing on a tropical beach while you’re riding the train home? What’s it like to dream at night without mental images, and how do you ‘count’ sheep before you fall asleep?”

The researchers note that while this study is exciting for its scope and comparatively large sample size, it is based on participants’ self-reports, which are subjective by nature.

Next, they plan to build on the study by using measurements that can be tested objectively, like analysing and quantifying people’s memories.




More information: Alexei J. Dawes et al. A cognitive profile of multi-sensory imagery, memory and dreaming in aphantasia, Scientific Reports (2020). DOI: 10.1038/s41598-020-65705-7

Provided by University of New South Wales

https://techxplore.com/news/2020-06-deep-framework-key-players-complex.html

A deep reinforcement learning framework to identify key players in complex networks

by Ingrid Fadelli , Tech Xplore

A deep reinforcement learning framework to identify key players in complex networks
Finding key players in a network. (a) The 9/11 terrorist network, which contains 62 nodes and 159 edges. Nodes represent terrorists involved in the 9/11 attack, and edges represent their social communications. Node size is proportional to its degree. (b) Removing 16 nodes (cyan) with the highest degree (HD) causes considerable damage, rendering a remaining GCC (purple) of 14 nodes. (c) Removing 16 nodes (cyan) with the highest collective-influence (CI) results in a fragmentation and the remaining GCC (purple) contains 18 nodes. (d) FINDER removes only 14 nodes (cyan), but leads to a more fragmented network and the remaining GCC (purple) contains only 9 nodes. Credit: Changjun Fan.

Network science is an academic field that aims to unveil the structure and dynamics behind networks, such as telecommunication, computer, biological and social networks. One of the fundamental problems that network scientists have been trying to solve in recent years entails identifying an optimal set of nodes that most influence a network’s functionality, referred to as key players.

Identifying key players could greatly benefit many real-world applications, for instance, enhancing techniques for the immunization of networks, as well as aiding epidemic control, drug design and viral marketing. Due to its NP-hard nature, however, solving this problem using exact algorithms with polynomial time complexity has proved highly challenging.

Researchers at National University of Defense Technology in China, University of California, Los Angeles (UCLA), and Harvard Medical School (HMS) have recently developed a deep reinforcement learning (DRL) framework, dubbed FINDER, that could identify key players in complex networks more efficiently. Their framework, presented in a paper published in Nature Machine Intelligence, was trained on a small set of synthetic networks generated by classical network models and then applied to real-world scenarios.

“This work was motivated by a fundamental question in network science: How can we find an optimal set of key players whose activation (or removal) would maximally enhance (or degrade) network functionality?” Yang-Yu Liu, one of the senior researchers who carried out the study, told TechXplore. “Many approximate and heuristic strategies have been proposed to deal with specific application scenarios, but we still lack a unified framework to solve this problem efficiently.”

FINDER, which stands for FInding key players in Networks through DEep Reinforcement learning, builds on recently developed deep learning techniques for solving combinatorial optimization problems. The researchers trained FINDER on a large set of small synthetic networks generated by classical network models, guiding it using a reward function specific to the task it is trying to solve. This strategy guides FINDER in determining what it should do (i.e., what node it should pick) to accumulate the greatest reward over a period of time based on its current state (i.e., the current network structure).

“It might be straightforward to represent states and actions in traditional reinforcement learning tasks, such as in robotics, which is not the case for networks,” Yizhou Sun, another senior researcher involved in the study, told TechXplore. “Another challenge we faced while working on this project was determining how to represent a network, as it has a discrete data structure and lies in an extremely high-dimensional space. To address this issue, we extended the current graph neural network to represent nodes (actions) and graphs (states), which is jointly learned with the reinforcement learning task.”

A deep reinforcement learning framework to identify key players in complex networks
Finding key players in the 9/11 terrorist network, where each node represents a terrorist involved in the 9/11 attack, and edges represent their social communications. Node size is proportional to its degree. Three methods: (a) High Degree (HD); (b) FINDER; (c) Collective Influence (CI). Blue nodes represent nodes in the remaining graph, red nodes indicate the key players identified at the current time step, and gray nodes are the remaining isolated ones. Panel (d) illustrates the three methods’ accumulated normalized connectivity (ANC) curves, which are plotted with the horizontal axis being the fraction of removed nodes, and the vertical axis being the fraction of nodes in the remaining giant connected component (GCC). Credit: Changjun Fan.
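
The figure caption above also describes the bookkeeping behind these comparisons: remove nodes one at a time and track how much of the network stays in the giant connected component (GCC). As a rough illustration only (this is not the authors' FINDER code; it assumes the Python networkx library and uses the simple highest-degree baseline from the caption), the sketch below computes that connectivity curve and its normalized area, the ANC, for a small synthetic graph:

import networkx as nx

def gcc_fraction(graph, total_nodes):
    # Fraction of the original nodes still inside the giant connected component.
    if graph.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.connected_components(graph), key=len)
    return len(largest) / total_nodes

def high_degree_removal_curve(graph):
    # The HD baseline from the caption: repeatedly delete the highest-degree
    # node and record the GCC fraction after each removal.
    g = graph.copy()
    n = g.number_of_nodes()
    curve = [gcc_fraction(g, n)]
    while g.number_of_nodes() > 0:
        node, _ = max(g.degree, key=lambda pair: pair[1])
        g.remove_node(node)
        curve.append(gcc_fraction(g, n))
    return curve

# Toy example on a synthetic scale-free graph of the kind the article says
# FINDER was trained on, not the 9/11 network shown in the figure.
g = nx.barabasi_albert_graph(100, 2, seed=1)
curve = high_degree_removal_curve(g)
anc = sum(curve) / len(curve)  # normalized area under the connectivity curve
print("ANC under high-degree removal:", round(anc, 3))

A learned policy such as FINDER would replace the highest-degree choice with whichever node its neural network predicts will shrink the GCC fastest, which is how it reaches a more fragmented network with fewer removals.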

In order to efficiently represent complex networks, the researchers collectively determined the best representation for individual network states and actions and the best strategy for identifying an optimal action when the network is in specific states. The resulting representations can guide FINDER in identifying key players in a network.

The new framework devised by Sun, Liu and their colleagues is highly flexible and can thus be applied to the analysis of a variety of real-world networks simply by changing its reward function. It is also highly effective, as it was found to outperform many previously developed strategies for identifying key players in networks both in terms of efficiency and speed. Remarkably, FINDER can easily be scaled up to analyze a wide range of networks containing thousands or even millions of nodes.

“Compared to existing techniques, FINDER achieves superior performances in terms of both effectiveness and efficiency in finding key players in complex networks,” Liu said. “It represents a paradigm shift in solving challenging optimization problems on complex real-world networks. Requiring no domain-specific knowledge, but just the degree heterogeneity of real networks, FINDER achieves this goal by offline self-training on small synthetic graphs only once, and then generalizes surprisingly well across diverse domains of real-world networks with much larger sizes.”

The new deep reinforcement framework has so far achieved highly promising results. In the future, it could be used to study social networks, power grids, the spread of infectious diseases and many other types of network.

The findings gathered by Liu, Sun and their colleagues highlight the promise of classical network models such as the Barabási–Albert model, from which they drew inspiration. While simple models may appear very basic, in fact, they often capture the primary feature of many real-world networks, namely the degree heterogeneity. This feature can be of huge value when trying to solve complex optimization problems related to intricate networks.

“My lab is now pursuing several research directions along this same line of research, including: (1) designing better graph representation learning architectures; (2) exploring how to transfer knowledge between different graphs and even graphs from different domains; (3) investigating other NP-hard problems on graphs and solving them from learning perspective,” Sun said.

While Sun and her team at UCLA plan to work on new techniques for network science research, Liu and his team at HMS would like to start testing FINDER on real biological networks. More specifically, they would like to use the framework to identify key players in protein-protein interaction networks and gene regulatory networks that could play crucial roles in human health and disease.




More information: Changjun Fan et al. Finding key players in complex networks through deep reinforcement learning, Nature Machine Intelligence (2020). DOI: 10.1038/s42256-020-0177-2

Learning combinatorial optimization algorithms over graphs. papers.nips.cc/paper/7214-lear … gorithms-over-graphs

Reinforcement learning for solving the vehicle routing problem. papers.nips.cc/paper/8190-rein … icle-routing-problem

Neural combinatorial optimization with reinforcement learning. arXiv:1611.09940 [cs.AI]. arxiv.org/abs/1611.09940

Albert-László Barabási et al. Emergence of Scaling in Random Networks, Science (1999). DOI: 10.1126/science.286.5439.509

Combinatorial optimization with graph convolutional networks and guided tree search. papers.nips.cc/paper/7335-comb … d-guided-tree-search

Machine learning for combinatorial optimization: a methodological tour d’horizon. arXiv:1811.06128 [cs.LG]. arxiv.org/abs/1811.06128

James J. Q. Yu et al. Online Vehicle Routing With Neural Combinatorial Optimization and Deep Reinforcement Learning, IEEE Transactions on Intelligent Transportation Systems (2019). DOI: 10.1109/TITS.2019.2909109

https://medicalxpress.com/news/2020-06-unexpected-mental-illnesses-spectrum-rare.html

Unexpected mental illnesses found in a spectrum of a rare genetic disorder

by UC Davis

Credit: CC0 Public Domain

UC Davis MIND Institute researchers found an unexpected set of mental illnesses in patients with a spectrum of a rare genetic disorder. Their study revealed the need for clinicians to consider the complexities of co-existing conditions in patients with both psychological and fragile X associated disorders.

Double-hit fragile X spectrum cases

The patients had a “double-hit” condition that combined features and symptoms of fragile X syndrome and premutation disorder.

Fragile X syndrome (FXS), a rare single-gene disorder, is the leading inherited cause of intellectual disability. It is caused by a lack of the fragile X mental retardation protein (FMRP), which results from a change, called a mutation, in the FMR1 gene.

In most people, the CGG section of the FMR1 gene is repeated between 10 and 40 times. In some rare cases, individuals have premutation disorder, in which the FMR1 gene has 55 to 200 CGG repeats. When this section expands to more than 200 repeats, there is a full mutation in the gene. This full mutation causes an inability to produce FMRP and leads to FXS.
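
Those thresholds amount to a simple classification rule. The toy sketch below encodes only the ranges quoted above; the label for repeat counts that fall between the quoted ranges is an assumption, since the article leaves that zone unnamed.

def classify_fmr1(cgg_repeats: int) -> str:
    # Thresholds taken from the paragraph above.
    if cgg_repeats <= 40:
        return "typical allele (roughly 10-40 repeats)"
    if 55 <= cgg_repeats <= 200:
        return "premutation (55-200 repeats)"
    if cgg_repeats > 200:
        return "full mutation (>200 repeats), the cause of FXS"
    # 41-54 repeats: not described in the article, so this label is an assumption.
    return "intermediate range (not covered above)"

for repeats in (30, 120, 900):
    print(repeats, "->", classify_fmr1(repeats))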

The study presented 14 cases of male patients with FMR1-gene mutations and a variety of psychiatric disorders. These patients, whose ages ranged from nine to 58 years, had features resembling FXS and symptoms common among premutation carriers.

FXS symptoms include hand flapping, hyperactivity, recurrent ear infections, severe anxiety and tantrums. Individuals with FXS frequently have speech and language delays, behavior challenges and symptoms of autism spectrum disorder (ASD).

Premutation, on the other hand, is associated with the development of neurological problems associated with aging. One example of such age-related problems is Fragile X-associated tremor ataxia syndrome (FXTAS). FXTAS is a disease characterized by progressively severe tremor and difficulty with walking and balance. Premutation is also associated with medical and psychiatric problems such as migraines, hypertension, sleep apnea, restless legs syndrome, anxiety and depression.

Neurological and developmental problems

The study found that patients with premutation had a much earlier onset of neurological problems. Some even had earlier symptoms of neurodegeneration, particularly if they had developmental delay or ASD during their childhood. They also showed trouble with their emotional processing.

“Lower levels of FMRP can cause a range of emotional processing issues,” said Andrea Schneider, associate research scientist in the Department of Pediatrics and at UC Davis MIND Institute and the lead author on the study. “Some of the common emotion-related disorders we found are mood disorders, anxiety and psychotic features.”

The researchers called for more studies on the association between psychosis and lower FMRP levels—especially in patients with a double-hit condition. The case series also highlighted the need for clinicians to consider FMR1 mutations as an additional possible diagnosis in psychiatric patients.

“Clinicians need to be aware of the physical and mental toll on patients with an FMR1 mutation who also show symptoms of psychosis or early onset of neurological problems,” said Paul Hagerman, professor of biochemistry and molecular medicine at UC Davis and co-author on the study. “This understanding helps develop treatment plans that address the multiple needs of these patients.”

The study, titled “Elevated FMR1-mRNA and lowered FMRP – A double-hit mechanism for psychiatric features in men with FMR1 premutations,” appears in the latest issue of the journal Translational Psychiatry.




More information: Andrea Schneider et al, Elevated FMR1-mRNA and lowered FMRP – A double-hit mechanism for psychiatric features in men with FMR1 premutations, Translational Psychiatry (2020). DOI: 10.1038/s41398-020-00863-w

Provided by UC Davis

https://thenextweb.com/artificial-intelligence/2020/06/25/what-is-natural-language-processing-and-generation-nlp-nlg-syndication/

A beginner’s guide to natural language processing and generation

by Ben Dickson — 1 day ago in Artificial Intelligence

Credit: Unsplash

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

20 years ago, if you had a database table containing your sales information and you wanted to pull up the ten most sold items in the past year, you would have to run a command that looked like this:

SELECT TOP 10 item_id, SUM(sale_total) AS total_sales FROM sales
WHERE sale_date > DATEADD(day, -365, GETDATE())
GROUP BY item_id ORDER BY total_sales DESC


Today, performing the same task can be as easy as writing the following query in a platform such as IBM Watson:

Which 10 items sold the most in the past year?

From punch cards to keyboards, mice, and touch screens, human-computer interfacing technologies have undergone major changes, and each change has made it easier to make use of computing resources and power.

But never have those changes been more dramatic than in the past decade, the period in which artificial intelligence turned from a sci-fi myth to everyday reality. Thanks to the advent of machine learning, AI algorithms that learn by examples, we can talk to Alexa, Siri, Cortana, and Google Assistant, and they can talk back to us.

Behind the revolution in digital assistants and other conversational interfaces are natural language processing and generation (NLP/NLG), two branches of machine learning that involve converting human language to computer commands and vice versa.

NLP and NLG have removed many of the barriers between humans and computers, not only enabling them to understand and interact with each other, but also creating new opportunities to augment human intelligence and accomplish tasks that were impossible before.

The challenges of parsing human language


For decades, scientists have tried to enable humans to interact with computers through natural language commands. One of the earliest examples was ELIZA, the first natural language processing application created by the MIT AI Lab in the 1960s. ELIZA emulated the behavior of a psychiatrist and dialogued with users, asking them about their feelings, and giving appropriate responses. ELIZA was followed by PARRY (1972) and Jabberwacky (1988).

Another example is Zork, an interactive adventure game developed in the 1970s, in which the player gave directives by typing sentences in a command line interface, such as “put the lamp and sword in the case.”

The challenge with all early conversational interfaces was that the software powering them was rule-based, which meant the programmers had to predict and include all the different forms in which a command could be given to the application. The problem with this approach was that, first, the code of the program became too convoluted, and second, developers still missed plenty of the ways that users might make a request.

As an example, you can ask about the weather in countless ways, such as “how’s the weather today?” or “will it rain in the afternoon?” or “will it be sunny next week?” or “will it be warmer tomorrow?” For a human, understanding and responding to all those different nuances is trivial. But rule-based software needs explicit instructions for every possible variation, and it has to take into account typos, grammatical errors and more.

The sheer amount of time and energy required to accommodate all those different scenarios is what previously prevented conversational applications from gaining traction. Over the years, we’ve become used to rigid graphical user interface elements such as command buttons and dropdown menus that prevent users from stepping outside the boundaries of the application’s predefined set of commands.

How machine learning and NLP solve the problem


NLP uses machine learning and deep learning algorithms to analyze human language in a smart way. Machine learning doesn’t work with predefined rules. Instead, it learns by example. In the case of NLP, machine learning algorithms train on thousands or millions of text samples (words, sentences and paragraphs) that have been labeled by humans. By studying those examples, the algorithms gain a general understanding of the context of human language and use that knowledge to parse future excerpts of text.

This model makes it possible for NLP software to understand the meaning of various nuances of human language without being explicitly told. With enough training, NLP algorithms can also understand the broader meaning of spoken or written human language.

For instance, based on the context of a conversation, NLP can determine if the word “cloud” is a reference to cloud computing or the mass of condensed water vapor floating in the sky. It might also be able to understand intent and emotion, such as whether you’re asking a question out of frustration, confusion or irritation.
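
As a minimal, hypothetical sketch of that learning-by-example idea (a toy scikit-learn pipeline with made-up training sentences, not any of the production systems named in this article), the snippet below trains on a handful of hand-labeled examples and then guesses which sense of “cloud” a new sentence uses:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled training sentences (invented toy data, standing in for the
# thousands or millions of labeled samples real systems train on).
texts = [
    "we migrated the database to the cloud last year",
    "the cloud provider billed us for extra storage",
    "deploy the service to a cloud server",
    "a dark cloud rolled in before the storm",
    "the sky was covered by a single grey cloud",
    "rain fell from the heavy cloud overhead",
]
labels = ["computing", "computing", "computing", "weather", "weather", "weather"]

# Turn each sentence into word statistics, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["we keep our backups on a cloud server"]))  # likely: computing
print(model.predict(["a grey cloud brought heavy rain"]))        # likely: weather

With only six training sentences the guesses are fragile; the point is simply that the mapping from context to meaning is learned from labeled examples rather than hand-written rules.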

What are the uses of NLP?

Digital assistants are just one of the many use cases of NLP. Another is the database querying example that we saw at the beginning of the article. But there are many other places where NLP is helping augment human efforts.

An example is IBM Watson for Cybersecurity. Watson uses NLP to read thousands of cybersecurity articles, whitepapers, and studies every month, more than any human expert could possibly study. It uses the insights it gleans from the unstructured information to learn about new threats and protect its customers against them.

We also saw the power of NLP behind the sudden leap that Google’s translation service took in 2016.

Some other use cases include summarizing blocks of text and automatically generating tags and related posts for articles. Some companies are using NLP-powered software to do sentiment-analysis of online content and social media posts to understand how people are reacting to their products and services.

Another domain where NLP is making inroads is chatbots, which are now accomplishing things that ELIZA wasn’t able to do. We’re seeing NLP-powered chatbots in fields such as healthcare, where they can question patients and run basic diagnoses like real doctors. In education, they’re providing students with on-demand online tutors that can help them through an easy-to-use, conversational interface whenever they need them.

In business, customer service chatbots use the technology to understand and respond to trivial customer queries, leaving human employees free to focus their attention on follow-ups and more complicated problems.

Creating output that looks human-made with NLG


The flip side of the NLP coin is NLG. According to Gartner, “Whereas NLP is focused on deriving analytic insights from textual data, NLG is used to synthesize textual content by combining analytic output with contextualized narratives.”

In other words, if NLP enables software to read human language and convert it to computer-understandable data, NLG enables it to convert computer-generated data into human-understandable text.

You can see NLG at work in a feature Gmail added a couple of years ago, which creates automatic replies to your emails in your own style. Another interesting use of NLG is creating reports from complex data. For instance, NLG algorithms can create narrative descriptions of company data and charts. This can be helpful for data analysts who have to spend considerable time writing meaningful reports on all the data they analyze for executives.
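
Production NLG systems learn their phrasing from data, but even a hand-written template conveys the basic idea of turning structured figures into a short narrative. A minimal sketch with invented quarterly sales numbers (not drawn from any real report):

# Toy, template-based natural language generation: turn a small table of
# made-up quarterly figures into the kind of sentence an analyst might write.
quarterly_sales = {"Q1": 1.2, "Q2": 1.5, "Q3": 1.4, "Q4": 2.1}  # in $ millions

def describe_sales(sales):
    quarters = list(sales)
    first, last = quarters[0], quarters[-1]
    best = max(quarters, key=lambda q: sales[q])
    direction = "grew" if sales[last] >= sales[first] else "fell"
    return (
        f"Sales {direction} from ${sales[first]:.1f}M in {first} "
        f"to ${sales[last]:.1f}M in {last}, with the strongest quarter "
        f"being {best} (${sales[best]:.1f}M)."
    )

print(describe_sales(quarterly_sales))

Learned NLG models go further by choosing the wording itself, but the flow is the same: structured data in, readable narrative out.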

The road ahead

In the beginning, there was a huge technical gap between humans and computers. That gap is fast closing, thanks in part to NLP and NLG and other AI-related technologies. We’re becoming more and more used to talking to our computers as if they were a real assistant or a friend back from the dead.

What happens next? Maybe NLP and NLG will remain focused on fulfilling more and more utilitarian use cases. Or maybe they’ll lead us toward machines that truly pass the Turing test and might deceive humans into loving them. Whatever the case, exciting times are ahead.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.

https://phys.org/news/2020-06-crispr-cas9-dna-scissors.html

Comparing 13 different CRISPR-Cas9 DNA scissors

by Institute for Basic Science

Comparing 13 different CRISPR-Cas9 DNA scissors
Figure 1. Schematics of the CRISPR-Cas9 system. A guide RNA guides Cas9 to the target DNA sequence, which is followed by the short protospacer adjacent motif (PAM). Researchers across the globe have been adopting this technology to cut DNA at desired positions. Credit: Kim, H., & Kim, J. S. Nature Reviews Genetics, 2014

CRISPR-Cas9 has become one of the most convenient and effective biotechnology tools used to cut specific DNA sequences. Starting from Streptococcus pyogenes Cas9 (SpCas9), a multitude of variants have been engineered and employed for experiments worldwide. Although all these systems are targeting and cleaving a specific DNA sequence, they also exhibit relatively high off-target activities with potentially harmful effects.

Led by Professor Hyongbum Henry Kim, the research team of the Center for Nanomedicine, within the Institute for Basic Science (IBS, South Korea), has achieved the most extensive high-throughput analysis of CRISPR-Cas9 activities. The team developed deep-learning-based computational models that predict the activities of SpCas9 variants for different DNA sequences. Published in Nature Biotechnology, this study represents a useful guide for selecting the most appropriate SpCas9 variant.

This study surpassed all previous reports, which had evaluated only up to three Cas9 systems. IBS researchers compared 13 SpCas9 variants and defined which four-nucleotide sequences can be used as protospacer adjacent motif (PAM) – a short DNA sequence that is required for Cas9 to cut and is positioned immediately after the DNA sequence targeted for cleavage.

Additionally, they evaluated the specificity of six different high-fidelity SpCas9 variants, and found that evoCas9 has the highest specificity, while the original wild-type SpCas9 has the lowest. Although evoCas9 is very specific, it also shows low activity at many target sequences: these results imply that, depending on the DNA target sequence, other high-fidelity Cas9 variants could be preferred.

  • Figure 2. PAM compatibilities for SpCas9 variants. (a) Darker colors indicate a higher frequency of DNA cleavage. (b) Among these four variants (SpCas9, VRQR, xCas9 and SpCas9-NG), SpCas9-NG has been the traditional choice for all PAM sequences that have a guanine (G) as the second nucleotide. However, these results show that for PAM sequences AGAG and GGCG, for example, the Cas9 variant VRQR (in blue) would be preferable. Credit: Institute for Basic Science
  • Figure 3. Comparing the specificity of the SpCas9 variants with a DNA sequence that has a single mismatch between the guide RNA and the target sequence. evoCas9 and the original SpCas9 exhibit the highest and the lowest specificity, respectively. Credit: Institute for Basic Science

Based on these results, IBS researchers developed DeepSpCas9variants (deepcrispr.info/DeepSpCas9variants/), a computational tool to predict the activities of SpCas9 variants. By accessing this public website, users may input the desired DNA target sequence, find out the most suitable SpCas9 variant and take full advantage of the CRISPR technology.
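
DeepSpCas9variants itself is a trained deep-learning predictor behind the website above, so the sketch below is only a toy stand-in for the idea: it takes a target sequence, reads off the four-base PAM that follows it (as described earlier in the article), and consults the two preferences quoted in the figure captions. Everything outside those two quoted rules is deliberately left to the real tool.

# Toy illustration only, NOT the DeepSpCas9variants model. The dictionary and
# the "G as second nucleotide" rule come from the figure captions above.
PREFERRED_VARIANT = {"AGAG": "VRQR", "GGCG": "VRQR"}

def suggest_variant(target_plus_pam):
    # The article describes the PAM as the short sequence immediately after the
    # targeted DNA, so take the last four bases of the input as the PAM.
    pam = target_plus_pam[-4:].upper()
    if pam in PREFERRED_VARIANT:
        return f"PAM {pam}: the study's data favor {PREFERRED_VARIANT[pam]}"
    if pam[1] == "G":
        return f"PAM {pam}: SpCas9-NG has been the traditional choice"
    return f"PAM {pam}: consult DeepSpCas9variants for a prediction"

# Hypothetical 20-base targets followed by their PAMs.
print(suggest_variant("GACGCATAAAGATGAGACGC" + "AGAG"))
print(suggest_variant("GACGCATAAAGATGAGACGC" + "TGGA"))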

“We began this research when we noticed the critical lack of a systematic comparison among the different SpCas9 variants,” says Kim. “Now, using DeepSpCas9variants, researchers can select the most appropriate SpCas9 variants for their own research purposes.”




More information: Nahye Kim et al. Prediction of the sequence-specific cleavage activity of Cas9 variants, Nature Biotechnology (2020). DOI: 10.1038/s41587-020-0537-9

Provided by Institute for Basic Science

https://linuxgizmos.com/raspberry-pi-like-zynq-7020-sbc-sells-for-73/

Raspberry Pi-like Zynq-7020 SBC sells for $72

Jun 25, 2020 — by Eric Brown

Sipeed has launched a $72, open-spec “Sipeed TANG Hex” SBC that runs Linux on an FPGA-enabled Zynq-7020 with 1GB RAM, 256MB flash, 10/100 Ethernet, 4x USB 2.0 ports, and an early RPi-like 26-pin GPIO header.

Chinese vendor Sipeed, which recently launched a Sipeed MaixCube dev kit based on a Kendryte K210 RISC-V chip, has returned with a Raspberry Pi-like SBC that runs Linux on a Xilinx Zynq-7020. The Sipeed TANG Hex is also referred to as the “Lychee HEX ZYNQ7020 FPGA Development Board Raspberry Pie Edition ZEDBOARD” on AliExpress, where it is selling for $72.47, and the “Taidacent HEX ZYNQ 7020 FPGA Development Board Raspberry Pi Edition ZEDBOARD XILINX FPGA Kit” on Amazon for $124.13.


Sipeed TANG Hex, front and back

Other names for the board include the Lychee Sugar and the Litchi candy. The latter, however, is also the name of a compute module based on an Anlogic EG4S20.

The Raspberry Pi reference alludes to the TANG Hex SBC’s RPi-like dimensions and general layout, and perhaps also to its 26-pin GPIO, which may or may not support early Pi add-on boards. The ZedBoard mention refers to the original Zynq-7020 SBC: Avnet’s community-backed ZedBoard, which otherwise has little in common with the TANG Hex.


Z-turn

The $72 price seems reasonable considering that the cheapest — and only other open-spec — Zynq-7000 SBCs we know of are MYIR’s somewhat equivalent Z-turn Board, which sells for $119 with a Zynq-7020, 1GB RAM, and 512MB NAND flash, and less feature-rich Z-turn Lite, which sells for $75 with a Zynq-7010 with a lower-end FPGA and 512MB RAM. The Zynq-7020 has 2x 667MHz Cortex-A9 cores, which are linked tightly to an Artix 7 FPGA with 85K logic cells.

The headless TANG Hex ships with 1GB LPDDR3, 256MB NAND, and a microSD slot. There is a 10/100Mbps Ethernet port (compared to GbE on the Z-turn), and 4x USB 2.0 ports. Onboard interfaces include UART, JTAG, and the 26-pin GPIO. The board is powered from a 12V/3.5A DC jack, and there is a power button and 2x user LEDs.

The CNXSoft story that alerted us to the TANG Hex quotes from the sole review on Amazon, which suggests this may be one of those “you get what you pay for” boards. The SBC seemed to work fine, but there was no power supply or cables and the only documentation was a link to schematics.

There are indeed schematics, but apparently no Linux image, and the documentation was scattered. CNXSoft says Sipeed did not return its phone calls. So this may also be one of those boards that is technically open source, but offers less free tech support than many commercial boards.


Sipeed TANG Hex (left) and ALINX 7020 (ALINX XILINX FPGA)

The Amazon review praised an older and much more expensive Zynq-7020 board called the Alinx 7020, which offers full documentation and standard accessories. Unlike the TANG Hex, the ALINX 7020 board has a partner page on Xilinx. It sells for $269 on AliExpress as the ALINX XILINX FPGA.

 
Further information

The Sipeed TANG Hex — feel free to insert whatever alternative name you may prefer — is available at AliExpress for $72.47 plus $1.05 shipping to the U.S., in China at Taobao for 439 RMB ($62) plus shipping, and at Amazon for $124.13 with free shipping, where it’s said to be from a company or reseller called Taidacent. More information may be found on Sipeed’s wiki. We did not see it on the Sipeed website.


https://phys.org/news/2020-06-simulation-microscope-transistors-future.html

‘Simulation microscope’ examines transistors of the future

by Simone Ulmer, Swiss National Supercomputing Centre

"simulation microscope" examines transistors of the future
Structure of a single-gate FET with a channel made of a 2-D material. Arranged around it are a selection of 2-D materials that have been investigated. Credit: Mathieu Luisier/ETH Zürich

Since the discovery of graphene, two-dimensional materials have been the focus of materials research. Among other things, they could be used to build tiny, high-performance transistors. Researchers at ETH Zurich and EPF Lausanne have now simulated and evaluated one hundred possible materials for this purpose and discovered 13 promising candidates.

With the increasing miniaturization of electronic components, researchers are struggling with undesirable side effects: In the case of nanometer-scale transistors made of conventional materials such as silicon, quantum effects occur that impair their functionality. One of these quantum effects, for example, is additional leakage currents, i.e. currents that flow “astray” and not via the conductor provided between the source and drain contacts. It is therefore believed that Moore’s scaling law, which states that the number of transistors per unit area doubles every 12-18 months, will reach its limits in the near future because of the increasing challenges associated with miniaturizing these active components. This ultimately means that the currently manufactured silicon-based transistors—called FinFETs, which equip almost every supercomputer—can no longer be made arbitrarily smaller due to quantum effects.

Two-dimensional beacons of hope

However, a new study by researchers at ETH Zurich and EPF Lausanne shows that this problem could be overcome with new two-dimensional (2-D) materials—or at least that is what the simulations they have carried out on the “Piz Daint” supercomputer suggest.

The research group, led by Mathieu Luisier from the Institute for Integrated Systems (IIS) at ETH Zurich and Nicola Marzari from EPF Lausanne, used the research results that Marzari and his team had already achieved as the basis for their new simulations: Back in 2018, 14 years after the discovery of graphene first made it clear that two-dimensional materials could be produced, they used complex simulations on “Piz Daint” to sift through a pool of more than 100,000 materials; they extracted 1,825 promising components from which 2-D layers of material could be obtained.

The researchers selected 100 candidates from these more than 1,800 materials, each of which consists of a monolayer of atoms and could be suitable for the construction of ultra-scaled field-effect transistors (FETs). They have now investigated their properties under the “ab initio” microscope. In other words, they used the CSCS supercomputer “Piz Daint” to first determine the atomic structure of these materials using density functional theory (DFT). They then combined these calculations with a so-called Quantum Transport solver to simulate the electron and hole current flows through the virtually generated transistors. The Quantum Transport Simulator used was developed by Luisier together with another ETH research team, and the underlying method was awarded the Gordon Bell Prize in 2019.

Finding the optimal 2-D candidate

The decisive factor for the transistor’s viability is whether the current can be optimally controlled by one or several gate contact(s). Thanks to the ultra-thin nature of 2-D materials—usually thinner than a nanometer—a single gate contact can modulate the flow of electron and hole currents, thus completely switching a transistor on and off.

“Although all 2-D materials have this property, not all of them lend themselves to logic applications,” Luisier emphasizes, “only those that have a large enough band gap between the valence band and conduction band.” Materials with a suitable band gap prevent so-called tunnel effects of the electrons and thus the leakage currents caused by them. It is precisely these materials that the researchers were looking for in their simulations.

Their aim was to find 2-D materials that can supply a current greater than 3 milliamperes per micrometer, both as n-type transistors (electron transport) and as p-type transistors (hole transport), and whose channel length can be as small as 5 nanometres without impairing the switching behavior. “Only when these conditions are met can transistors based on two-dimensional materials surpass conventional Si FinFETs,” says Luisier.
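
The selection criteria in the last two paragraphs amount to a simple screen once the simulated currents are in hand. A toy sketch of that screen, with invented candidate entries rather than numbers from the study:

# Hypothetical candidates with made-up values; only the thresholds (currents
# above 3 mA/um for both device types, channels down to 5 nm) come from the text.
candidates = [
    {"name": "material_A", "i_n_mA_per_um": 3.4, "i_p_mA_per_um": 3.1, "min_channel_nm": 5.0},
    {"name": "material_B", "i_n_mA_per_um": 2.0, "i_p_mA_per_um": 3.5, "min_channel_nm": 5.0},
    {"name": "material_C", "i_n_mA_per_um": 3.6, "i_p_mA_per_um": 3.2, "min_channel_nm": 10.0},
]

def can_beat_finfets(c):
    # More than 3 mA/um as both an n-type and a p-type transistor, with the
    # switching behavior preserved down to a 5 nm channel.
    return (c["i_n_mA_per_um"] > 3.0
            and c["i_p_mA_per_um"] > 3.0
            and c["min_channel_nm"] <= 5.0)

promising = [c["name"] for c in candidates if can_beat_finfets(c)]
print("Promising 2-D channel candidates:", promising)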

The ball is now in the experimental researchers’ court

Taking these aspects into account, the researchers identified 13 possible 2-D materials with which future transistors could be built and which could also enable the continuation of Moore’s scaling law. Some of these materials are already known, for example black phosphorus or HfS2, but Luisier emphasizes that others are completely new—compounds such as Ag2N6 or O6Sb4.

“We have created one of the largest databases of transistor materials thanks to our simulations. With these results, we hope to motivate experimentalists working with 2-D materials to exfoliate new crystals and create next-generation logic switches,” says the ETH professor. The research groups led by Luisier and Marzari work closely together at the National Centre of Competence in Research (NCCR) MARVEL and have now published their latest joint results in the journal ACS Nano. They are confident that transistors based on these new materials could replace those made of silicon or of the currently popular transition metal dichalcogenides.




More information: Cedric Klinkert et al. 2-D Materials for Ultrascaled Field-Effect Transistors: One Hundred Candidates under the Ab Initio Microscope, ACS Nano (2020). DOI: 10.1021/acsnano.0c02983

Provided by Swiss National Supercomputing Centre

https://medicalxpress.com/news/2020-06-recursive.html

New study examines recursive thinking

by Carnegie Mellon University

New study examines recursive thinking
US Adults, Tsimane’ Adults, Children, and Monkeys complete the Recursive sequencing task. Credit: S. Ferrigno, Harvard University

Recursion—the computational capacity to embed elements within elements of the same kind—has been lauded as the intellectual cornerstone of language, tool use and mathematics. In a new study published in the June 26 issue of the journal Science Advances, a multi-institutional team of researchers shows for the first time that this ability is shared across age, species and cultural groups.

“Recursion is a way to organize information that allows humans to see patterns in information that are rich and complex, and perhaps beyond what other species see,” said Jessica Cantlon, the Ronald J. and Mary Ann Zdrojkowski Professor of Developmental Neuroscience at CMU and senior author on the paper. “We try to trace the origins of our complex and rich intellectual activities to something in our evolutionary past to understand what makes our thinking similar to and distinct from other species.”

The team set up a series of experiments with U.S. adults, adults from an indigenous group in Bolivia that largely lacks formal education, U.S. children and non-human primates. After training each group on the task, the researchers provided them with sequences to order. They studied whether each group performed the task in a recursive or non-recursive (listing) way, and looked to see which order participants naturally chose.

The researchers found that the human participants from all age and cultural groups spontaneously ordered content in a recursive way, building nested structures. The non-human primate subjects more commonly used a simpler listing strategy, but with additional exposure they began using the recursive strategy, eventually ending up in the range of performance of human children.

A Tsimane’ adult completes the recursion task. Credit: S. Ferrigno, Harvard University
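
Seen as code, the contrast between the two strategies comes down to whether each new bracket pair is embedded inside the previous one or simply appended after it. A toy sketch (the bracket pairs are illustrative stand-ins, not the study's actual stimuli):

pairs = [("(", ")"), ("[", "]"), ("{", "}")]

def nest(pairs):
    # Recursive strategy: each new pair is embedded inside the previous one.
    if not pairs:
        return ""
    opener, closer = pairs[0]
    return opener + nest(pairs[1:]) + closer

def listing(pairs):
    # Non-recursive strategy: pairs are concatenated one after another.
    return "".join(opener + closer for opener, closer in pairs)

print("recursive ordering:", nest(pairs))     # ([{}])
print("listing ordering:  ", listing(pairs))  # ()[]{}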

“This ability to represent recursive structures is present in children as young as three years old, which suggests it is there even before they use it in language,” said Stephen Ferrigno, a post-doctoral fellow at Harvard University and first author on the paper. “We also saw this ability across people from widely different human cultures. Non-human primates also have the capacity to represent recursive sequences, given the right experience. These results dispel the long-held belief that only humans have the capacity to use this rule.”

The team found that working memory was an important factor affecting the sequencing abilities of participants. A strong correlation exists between working memory and the use of the hierarchical strategy.

New study examines recursive thinking
A U.S. adult participating in the study on recursion. Credit: Cantlon Lab

“Some of the errors were due to working memory, because participants had to remember which objects went first and relate that to other objects later in the list,” said Ferrigno. “Children and non-human primates had more errors, which may be due to lower working memory capacity.”

The authors note that this work offers a simplified version of a recursive task using visual cues. A more complex series of tasks may not yield the same results.

“There is something universal of being a human that lets our brains think this way spontaneously, but primates have the ability to learn it to some degree,” said Cantlon. “[This research] really gives us a chance to sort out the evolutionary and developmental contributions to complex thought.”




More information: S. Ferrigno at Harvard University in Cambridge, MA et al., “Recursive sequence generation in monkeys, children, US adults, and native Amazonians,” Science Advances (2020). advances.sciencemag.org/lookup … .1126/sciadv.aaz1002

Provided by Carnegie Mellon University

https://phys.org/news/2020-06-evolution-synapse.html

The evolution of the synapse

by John Hewitt , Phys.org

The evolution of the synapse
Credit: TheFreeDictionary.com

Among the most easily recognizable features of any nervous system is the synapse. While the question of how synapses evolved has been a longstanding mystery, it can now largely be solved. In a nutshell, it appears that the synapses between neurons evolved directly from the original cell-to-cell contacts, namely, the adherence junctions and other bonds that linked the primitive epithelial sheets of early multicellular organisms.

In other words, the story of how nervous systems originated dates to the very origins of multicellularity itself, or at least close to it. The implications of this assertion are profound. While the exact details are still murky, multicellular organisms evolved through episodes of clonal acquisition and merger. In its simplest incarnation, a dividing cell replicates its DNA and partitions its membrane in the normal way, but the daughter cells remain attached. A crude form of apparent phenotypic differentiation soon follows with the induction of some temporal asynchrony in the maturation processes of the individual cells. Some daughter cells may speed right through to normal adult form while others dawdle about permanently in some new differentiated form.

The other pathway to multicellularity is for independent cells with completely unique DNA to link together somehow. Curiously, this second mechanism persists in sundry extant aggregating organisms like slime molds and choanoflagellates. As a sister group to all metazoans, the choanoflagellates are of special interest here, not just because of their flexible shifts between unicellularity and colonialism, but because of the special cell junctions they developed to control their state and polarity. While aggregation is typically thought of as a response to starvation or stress, there is another critical component that links the behavior of these kinds of organisms more directly to nervous systems: communication.

The question of how the communication systems of neurons, namely their neurotransmitter systems, first evolved has now also been largely answered. In a recent sweeping review for Current Biology, Detlev Arendt of Heidelberg, Germany, details the full genesis of the actual hardware of both the pre- and postsynapse. The paper lays down the order in which the canonical small molecule transmitter systems of life sequentially made their first appearance on the stage, and chronicles the novel deployment and proliferation of the associated structural matrix, scaffolding and adhesion molecules that made it all happen.

On the presynaptic side, Detlev subdivides things into three main parts: the active zone vesicle release machinery, the voltage-dependent calcium channels that convert the spike to a calcium transient, and the SNAP/SNARE/synaptotagmin complexes that translate the calcium signal into vesicle fusion. In the evolutionary expansion of the synaptotagmins in particular, there seems to have been an important early role in the calcium-regulated transport of glycerolipids between membranes. Once in the form of vesicles, these membrane parcels inevitably also contained hydrophilic proteins, and presumably also soluble peptides ensconced or otherwise imported into their interior.

The evolutionary record also shows a suite of vesicular purine nucleotide transporters were present early on in life. With the addition of appropriate receptors to the mix, the makings of crude peptide and nucleotide transmitter systems were already in hand before animals appeared. In choanoflagellates, for example, secretory vesicles bud off the trans-Golgi network to fuse with the apical plasma membrane. A nearly complete calcium-sensitive presynaptic machinery is also active in dense core vesicle secretion in many non-neural secretory cell types.

Aside from the peptide and nucleotide transmitters, the first of the modern amino acid (or amino acid-derived) neurotransmitters appears to be glutamate. Vesicular glutamate transporters in the family SLC17A6-8 (Solute Carrier) are present across all the animals, including early-branching sponges and placozoans. The first vesicular inhibitory amino acid transporter (VIAAT-SLC32A1) appears to be that for the uptake of GABA and glycine, as it was present in the cnidarian-bilaterian ancestor. The next to evolve was the vesicular acetylcholine transporter (VAChT-SLC18A3), which has only been found in bilaterians. This gives a rough initial transmitter chronology as follows:

ATP > glutamate > GABA/glycine > acetylcholine

On the postsynaptic side, Detlev notes that the modular structure tends to be much more variable than that of the presynapse. The glutamatergic postsynapse, with its elaborate spine formation, tends to be more complex than the cholinergic postsynapse, although both differ in important ways from the inhibitory postsynapse. For example, receptors directly bind to the scaffolding proteins built around the postsynaptic density-specific protein Shank in the two excitatory synapses but not in the inhibitory ones. The nicotinic acetylcholine receptors are ligand-gated pentameric ion channels and are distantly related to GABA-A and glycine receptors.

In glutamate postsynapses, the Shank protein connects to the actin skeleton via cortactin, and forms a mesh-like matrix with a scaffolding protein known as Homer. At some point in evolution, it appears that the early glutamatergic postsynapse incorporated an ancient filopodial outgrowth module based on another scaffolding protein known as IRSp53 to establish contact with a presynaptic cell. This union then produced the standard dendritic spine morphology we still see today in many excitatory neurons.

Much of the evidence for this joint venture comes from the IRSp53-containing filopodial-like structures of the so-called microvillar collar found in the apical region of choanoflagellates and sponge choanocytes. In fact, a relic microvillar collar is still critically retained in metazoans in many sensory and secretory epithelial cell types. For example, stable IRSp53-positive microvilli form the business ends of our hair cells, and also that of the spinal-fluid-contacting cells found deep within our cerebral ventricles.

The later-evolving cholinergic postsynapse appears to use the same glutamatergic-style Shank scaffold, though not the Homer and IRSp53 modules. The inhibitory GABA and glycine postsynapses, on the other hand, do not anchor their ionotropic receptors to each other or the cytoskeleton with Shank, Homer or IRSp53. Instead, they deploy another intriguing molecule known as gephyrin. A curious thing about gephyrin is that it moonlights in another critical, if sometimes enigmatic role: It sits at the apex of a complex molybdenum cofactor synthesis chain, and pops a single Mo ion into the molybdopterin backbone ultimately used in at least four human enzymes.

I asked Detlev how this ancient, widely expressed, critical molybdenum synthesis protein might have acquired its synaptic side hustle, and he said, “That’s the million-dollar question for us, now.” Perhaps the clues will soon come from the same kinds of phylogenetic inspection as above. In looking more closely at the many cadherin and integrin proteins that anchor things together at the synapse, it is just now possible to see directly how the rigid apical-to-basal vertical ordering of the specific adherence, occluding and septate junction subtypes found in the most primitive creatures topologically unfolded into the linear, polarized, axo-dendritic domains that fix our own nervous systems today.

Perhaps this sentiment is nowhere more poignant than in the exquisitely structured paranode borders of the nodes of Ranvier, which serve to facilitate the rapid conduction of nerve impulses in myelinated axon segments. Strictly speaking, tight junctions emerge first in chordates and are located apical to adherens junctions, while the more elusive septate junctions appear on the sides. The funny thing here is that septate junctions have only been unambiguously reported in invertebrates, with the exception of just one place—the vertebrate paranodes.




More information: Detlev Arendt. The Evolutionary Assembly of Neuronal Machinery, Current Biology (2020). DOI: 10.1016/j.cub.2020.04.008


https://www.forbes.com/sites/tonyewing/2020/06/26/5-incredibly-cool-habits-that-make-you-appear-smarter-than-you-already-are/#3ade8892a235


5 Incredibly Cool Habits That Make You Appear Smarter Than You Already Are

by Tony Ewing, Contributor, Leadership Strategy. I write about risk-taking, disruption and the behavioral science of leadership.

Woman stands calm in front of math chalkboard
Whether you’re smart or not, a few simple habits will make your life a bit easier. GETTY

One of my favorite behavioral science findings is that risk-takers tend to be smarter than your average Joe. I love taking risks and consider myself a risk-taker. So I’d be lying if I said my ego wasn’t stroked a bit by imagining people think I’m smarter than I am because I text while driving. And even if I’m not so smart, what’s the harm in appearing smarter than I am?

Very little, it turns out.

When you exude smartness, for example, people tend to give you things for free—such as the benefit of the doubt. Thus, in conversations about places and concepts you should know about, but don’t, they take your silence and head nods as deep wisdom. And in situations like job interviews—where others get ripped apart with tough questions—you get a milder grilling, because no inquisitor wants to be shut down and embarrassed.

More seriously, career success, social success and general feelings of well-being all relate to how smart others perceive us to be—up to the point of appearing nerdy and antisocial.

Thus, all in all, it’s a pretty big deal to appear smart—even if you’re not. And what’s even better, appearing smarter than you are is an art anyone can learn. Yet, for that purpose, you must appeal to behavioral science. Otherwise, following the advice of random thought leaders will have you looking less like a smart person and more like a game show contestant.

In that connection, here are a few incredibly cool habits you can cultivate that will make you appear smarter:

  1. Avoid asserting that recent news events or happenings are reversals of a trend (i.e., “end-point” bias). Many people confuse alarming numbers or statistics with the reversal of a trend. For example, if crime has been decreasing for years, a recent spate of robberies will lead them to say, “Things are getting worse.” This is known as end-point bias, and smart people don’t fall for it very often. They recognize the difference between a given context and the bigger picture. Similarly, if you think before you speak (which is par for the course in intelligence), you might catch yourself before falling for this bias. Furthermore, if someone else falls for end-point bias during a conversation and you notice it, it’s a golden opportunity. You can politely say something like, “I was thinking that too, but I was surprised to discover the trend has been the same…” and you’ll come across as both smarter and more emotionally intelligent. The habit to cultivate here involves not letting emotion get the best of you: when recent news events fuel an emotional conversation, they can short-circuit your thinking. (A small illustrative sketch follows this list.)
  2. Learn to communicate your ideas with your voice. Many of us assume we appear smarter in written form, largely because we can correct stufid merrors* without anyone noticing. But researchers at Berkeley’s Business School have found that even recruiters from elite employers are influenced by hearing a job candidate’s voice. Indeed, job candidates seem smarter, on average, if they merely describe themselves aloud. The key insight here is that our mouths convey our emotions, thought patterns and intelligence. Our writing doesn’t. Naturally, this means many of us will need to up our elocution game. We’ll need to learn to organize our thoughts and practice presenting them in a logical order while avoiding words that make us look stupid. Moreover, if need be, we’ll need to prepare to defend those ideas. Finding a good friend to act as a sounding board for all this can help in cultivating this habit. Just make sure it’s a smart friend.
  3. Don’t make assumptions—just ask. On one level, it’s always good to mentally place yourself in another person’s shoes before making judgments or criticisms. For example, suppose you enter some bureaucratic office and the receptionist is rude and dismissive. It helps beforehand to imagine he or she could be a bit frustrated by the long line of other people. However, it would be a mistake to automatically assume you know why he or she is abrupt. According to behavioral science, making too many assumptions makes you appear egoistic. That makes you appear the opposite of smart. A better approach, according to researchers from The University of Chicago, is to just ask. You might do this by opening with body language and a sigh that mimic the person’s apparent disposition. Then say, “I’m having a rough one. How about you?” While that seems only sociable (and even a bit manipulative, I admit), it actually makes you appear smarter. And in some ways, you really are. Cultivating the habit of asking people good (i.e., relevant) questions is about as smart as one can be. And that’s actually the next habit on our list…
  4. Learn to ask good questions. Ever met someone whose questions sound contrived? It’s as though they read a book and are shooting inane queries at you to seem more interesting. They would, if those questions actually were interesting. In fact, some Harvard Business School researchers have found that asking the right questions makes you appear smarter and more interesting. But what exactly are the “right” questions? Questions that seek advice. Coming across with an ego brings you down a notch or two on the smartness index in the eyes of observers. Yet, fanning the ego of some listeners by asking for their advice does the opposite. The habit to cultivate here is obvious: humble yourself enough to ask others for advice in areas you suspect they know more about. You’ll look smarter and get a good tip at the same time.
  5. Value “stuff” less and ideas and self-improvement more. Suppose your colleagues are sitting around talking about all the stuff they have. One just bought a new watch. Another got a great deal on a flat screen. Someone else bought a new car. A smart person would indulge her colleagues and share in their joy—at first. But after a few minutes, she’d elevate the conversation away from those physical sorts of things to ideas. The reason is not so obvious. From a behavioral perspective, talking about what we have makes us seem slaves to the temporary in the eyes of others. Smarter people, by contrast, are subconsciously viewed as having greater mastery of things that are more permanent—like ideas. Moreover, the more temporary things are (say, a great meal you had last night), the more simpleton-like you sound. Scientists from the University of New Hampshire have found that elevating the conversation is the way out. The habit to cultivate here is using the conversation’s topic as a springboard to a bigger, self-improving idea. A watch discussion turns to a discussion about time you’d love to spend, if you had it, with the kids. A flat screen turns to a discussion of how nice it is to watch a movie with a significant other. And a story about a new car, well, naturally that should turn to how the best car ever made was the 1967 Shelby Cobra Mark III, with a 427 V8 slapped on its side. (Okay, we all have our weaknesses!) But back to the point: gently reorient the conversation. Over time, people will come to see you as a smarter, improving idea person.
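
To make end-point bias (habit 1) concrete, here is a small, purely illustrative Python sketch. The robbery figures are invented for this example and do not come from the article or any study; the point is simply that the most recent data point alone suggests a reversal, while a least-squares fit over the whole series still shows a downward trend.

```python
# Made-up yearly robbery counts: a decade of decline with an uptick in the final year.
robberies = [980, 940, 915, 870, 830, 790, 760, 720, 700, 735]
years = list(range(2011, 2021))

# End-point view: judge only the most recent change.
endpoint_change = robberies[-1] - robberies[-2]
print(f"Last-year change: {endpoint_change:+d} robberies (looks like things are getting worse)")

# Whole-trend view: ordinary least-squares slope across all ten years.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(robberies) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, robberies)) / sum(
    (x - mean_x) ** 2 for x in years
)
print(f"Ten-year trend: {slope:.1f} robberies per year (still falling on average)")
```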

Of course, there are also many cheesy ways to look smarter. Yet, it’s hard to pull these off without looking pretentious. We can use simple words when speaking. But unless you’re Hemingway, it’s hard to simplify your vocabulary to the point where a few words come across as having deep meaning. Simplify, yes, but don’t start sounding like a faux philosopher.

And you could start wearing glasses, smiling less and quitting vodka, but that turns you into an actor at the great risk of looking artificial. Instead, the above habits might prove easier, more natural and more convincing. They’ll at least complement the intellect you already possess. And there’s no harm in that!

*For those who have pointed out this spelling error: thanks; however, it was done on purpose.

Tony Ewing

An entrepreneur, former banker and student of Nobel Laureates John Nash and Daniel Kahneman, I run an exciting and unique business that uses bespoke, behavioral analytics