http://www.extremetech.com/gaming/223567-amd-clobbers-nvidia-in-updated-ashes-of-the-singularity-directx-12-benchmark

AMD clobbers Nvidia in updated Ashes of the Singularity DirectX 12 benchmark

Roughly six months ago, we covered the debut of Ashes of the Singularity, the first DirectX 12 title to launch in any form. With just a month to go before the game launches, the developer, Oxide, has released a major new build with a heavily updated benchmark that’s designed to mimic final gameplay, with updated assets, new sequences, and all of the enhancements to the Nitrous Engine Oxide has baked in since last summer.

Ashes of the Singularity is a spiritual successor to games like Total Annihilation, and the first DirectX 12 title to showcase AMD and Nvidia GPUs working side-by-side in a multi-GPU configuration.

The new build of the game released to press now allows for multi-GPU configuration testing, but time constraints limited us to evaluating general performance on single-GPU configurations. With Ashes launching in just under a month, the data we see today should be fairly representative of final gameplay.

AMD, Nvidia, and asynchronous compute

Ashes of the Singularity isn’t just the first DirectX 12 game — it’s also the first PC title to make extensive use of asynchronous computing. Support for this capability is a major difference between AMD and Nvidia hardware, and it has a significant impact on game performance.

A GPU that supports asynchronous compute can use multiple command queues and execute these queues simultaneously, rather than switching between graphics and compute workloads. AMD supports this functionality via its Asynchronous Compute Engines (ACE) and HWS blocks on Fiji.
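
In D3D12 terms, that means creating more than one command queue. Below is a minimal sketch, assuming an already-created ID3D12Device and omitting error handling; it illustrates the API shape involved, not code from any shipping engine.

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create one graphics ("direct") queue and one compute queue.
// A direct queue accepts graphics, compute, and copy work; a compute
// queue accepts only compute and copy work. Hardware with asynchronous
// compute support can execute work from both queues concurrently.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC graphicsDesc = {};
    graphicsDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&graphicsDesc, IID_PPV_ARGS(&graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```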

Fiji’s architecture. The HWS blocks are visible at top.

Asynchronous computing is, in a very real sense, GCN’s secret weapon. While every GCN-class GPU since the original HD 7970 can use it, AMD quadrupled the number of ACEs per GPU when it built Hawaii, then modified the design again with Fiji. Where the R9 290 and 290X use eight ACEs, Fiji has four ACEs and two HWS units. Each HWS can perform the work of two ACEs, and the HWS units appear to be capable of additional (but as-yet unknown) work as well.

The exact state and nature of Nvidia’s asynchronous compute capabilities are still unclear. We know that Nvidia’s Maxwell can’t perform anything like the concurrent execution that AMD GPUs can manage. Maxwell can benefit from some light asynchronous compute workloads, as it does in Fable Legends, but the benefits on Team Green hardware are small.

Nvidia’s async compute support is limited compared with AMD’s.

The Nitrous Engine that powers Ashes of the Singularity makes extensive use of asynchronous compute, using it for up to 30% of a given frame’s workload. Oxide has stated that it believes this will be a common approach in future games and game engines, since DirectX 12 encourages the use of multiple engines to execute commands from separate queues in parallel.
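
As a rough illustration of that submission model (hypothetical names, not Oxide’s code), a frame’s rasterization work and its compute-shader work can be recorded into separate command lists and handed to separate queues, leaving the GPU free to overlap them where the hardware allows:

```cpp
#include <d3d12.h>

// Hypothetical frame-submission fragment. The two command lists are
// assumed to have been recorded and closed elsewhere; on hardware with
// asynchronous compute, the GPU may execute them concurrently.
void SubmitFrame(ID3D12CommandQueue* graphicsQueue,
                 ID3D12CommandQueue* computeQueue,
                 ID3D12GraphicsCommandList* sceneList,     // rasterization work
                 ID3D12GraphicsCommandList* lightingList)  // compute-shader work
{
    ID3D12CommandList* const graphicsLists[] = { sceneList };
    graphicsQueue->ExecuteCommandLists(1, graphicsLists);

    ID3D12CommandList* const computeLists[] = { lightingList };
    computeQueue->ExecuteCommandLists(1, computeLists);
}
```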

Test setup and performance

We tested both the AMD Fury X and the Nvidia GeForce GTX 980 Ti in a Haswell-E system with 16GB of DDR4-2667 and Windows 10 with all updates installed. AMD distributed a new driver for this review; Nvidia did not, so we used the WHQL 361.91 driver, released on 2/16/2016, for our performance testing.

We confined ourselves to DirectX 12 testing this time out, but Anandtech did cover DX11. The performance data there suggests that both AMD and Nvidia improved in all modes. Nvidia continues to outperform AMD in DX11 (though not in DX12), but the gap is much smaller than it was previously. As before, however, DirectX 12 gives Nvidia no performance improvement over and above DX11.

We tested Ashes of the Singularity in three detail modes (High, Extreme, and Crazy), with asynchronous compute enabled and disabled, to measure the impact on AMD versus Nvidia cards. The feature is enabled by default.

We’re going to show you results first with asynchronous compute enabled versus disabled, then by resolution.

1080p benchmark results, asynchronous compute enabled vs. disabled.

With asynchronous compute disabled, AMD’s R9 Fury X leads the GTX 980 Ti by 7-8% across all three detail levels. Enable asynchronous compute, however, and AMD roars ahead, beating its Nvidia counterpart by 24-28%. The GeForce GTX 980 Ti’s performance, in contrast, drops by 5-8% if asynchronous compute is enabled. This accounts for some of the gap between the two manufacturers, but by no means all of it.

Let’s shift to 4K and check performance there:

4K benchmark results, asynchronous compute enabled vs. disabled.

Higher resolutions have often favored AMD cards, and this is no exception. With asynchronous compute disabled, AMD GPUs are still running 11-15% faster than their Nvidia counterparts. Enable async compute, and that gap doubles — the Radeon R9 Fury X is no less than 31-33% faster than the Nvidia GTX 980 Ti. Given how the Fury X struggled out of the gate, that’s got to be a welcome sight for Team Red.

Is Ashes of the Singularity biased?

Ashes of the Singularity is the first DX12 game on the market, and the performance delta between AMD and Nvidia is going to court controversy from fans of both companies. We won’t know whether its performance results are typical until we see more games on the market. But is the game intrinsically biased to favor AMD? I think not — for multiple interlocking reasons.

First, there’s the fact that Oxide shares its engine source code with both AMD and Nvidia and has invited both companies to review and suggest changes for most of the time Ashes has been in development. The company’s Reviewer’s Guide includes the following:

[W]e have created a special branch where not only can vendors see our source code, but they can even submit proposed changes. That is, if they want to suggest a change our branch gives them permission to do so…

This branch is synchronized directly from our main branch so it’s usually less than a week from our very latest internal main software development branch. IHVs are free to make their own builds, or test the intermediate drops that we give our QA.

Oxide also directly addresses the question of whether it optimizes for specific hardware or graphics architectures.

Oxide primarily optimizes at an algorithmic level, not for any specific hardware. We also take care to avoid the proverbial known “glass jaws” which every hardware has. However, we do not write our code or tune for any specific GPU in mind. We find this is simply too time consuming, and we must run on a wide variety of GPUs. We believe our code is very typical of a reasonably optimized PC game.

We reached out to Dan Baker of Oxide regarding the decision to turn asynchronous compute on by default for both companies and were told the following:

“Async compute is enabled by default for all GPUs. We do not want to influence testing results by having different default setting by IHV, we recommend testing both ways, with and without async compute enabled. Oxide will choose the fastest method to default based on what is available to the public at ship time.”

Second, we know that asynchronous compute takes advantage of hardware capabilities AMD has been building into its GPUs for a very long time. The HD 7970 was AMD’s first card with an asynchronous compute engine, and it launched in 2012. You could even argue that devoting die space and engineering effort to a feature that wouldn’t be useful for four years was a bad idea, not a good one. AMD has consistently said that some of the benefits for older cards would appear in DX12, and that appears to be what’s happening.

Asynchronous compute is not itself part of the DX12 specification, but it is one method of implementing DirectX 12’s multi-engine model. Multiple engines are explicitly part of the DX12 specification. How these engines are implemented may well impact relative performance between AMD and Nvidia, but they’re one of the advantages of using DX12 as compared with previous APIs.
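
Because the DX12 specification leaves cross-engine ordering to the application, queues coordinate through fences. Here is an illustrative sketch (the API calls are real; the scenario and names are ours) in which the compute queue is gated until the graphics queue has produced the data it consumes:

```cpp
#include <d3d12.h>
#include <wrl/client.h>

// Illustrative cross-queue synchronization. Signal/Wait are GPU-side
// operations inserted into each queue's timeline; the CPU does not block.
void GateComputeOnGraphics(ID3D12Device* device,
                           ID3D12CommandQueue* graphicsQueue,
                           ID3D12CommandQueue* computeQueue)
{
    Microsoft::WRL::ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    const UINT64 kDataReady = 1;
    graphicsQueue->Signal(fence.Get(), kDataReady); // fires when prior work completes
    computeQueue->Wait(fence.Get(), kDataReady);    // later submissions wait on the GPU
}
```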

AMD vs. Nvidia asynchronous compute comparison. Table and paper by Ext3h.

Third, every bit of independent research on this topic has confirmed that AMD and Nvidia have profoundly different asynchronous compute capabilities. Nvidia’s own slides illustrate this as well. Nvidia cards cannot handle asynchronous workloads the way that AMD’s can, and the differences between how the two cards function when presented with these tasks can’t be bridged with a few quick driver optimizations or code tweaks. Beyond3D forum member and GPU programmer Ext3h has written a guide to the differences between the two platforms — it’s a work-in-progress, but it contains a significant amount of useful information.

Fourth, Nvidia PR has been silent on this topic. Questions about Maxwell and asynchronous compute have been bubbling for months; we’ve requested additional information on several occasions. Nvidia is historically quick to respond to either incorrect information or misunderstandings, often by making highly placed engineers or company personnel available for interview. The company has a well-deserved reputation for being proactive in these matters, but we’ve heard nothing through official channels.

Fifth and finally, we know that AMD GPUs have always had enormous compute capabilities. Those capabilities haven’t always been displayed to their best advantage for a variety of reasons, but they’ve always existed, waiting to be tapped. When Nvidia designed Maxwell, it prioritized rendering performance — there’s a reason why the company’s highest-end Tesla SKUs are still based on Kepler (aka the GTX 780 Ti / Titan Black).

It’s fair to say that the Nitrous Engine’s design runs better on AMD hardware — but there’s no proof that the engine was designed to disadvantage Nvidia hardware, or to prevent Nvidia cards from executing workloads effectively.

Conclusion

Ashes of the Singularity launches in a month. It’s going to be a major DX12 data point for several years, at least, and we don’t yet know if the shift to that API means that more engines will move to using asynchronous compute or not. It’s certainly possible, particularly given that both the Xbox One and PS4 can make use of asynchronous compute already.

For now, we recommend treating these results as an interesting example of how a new API can open up performance capabilities and breathe new life into older hardware. While time constraints prevented us from testing older AMD or Nvidia cards, data we’ve seen suggests that AMD GPUs see advantages from async compute across the company’s entire product stack. It’s not a miracle cure for an otherwise-slow card, but it gives a solid benefit.

If you already own a GeForce card, we still recommend waiting before rushing out to buy new hardware. Both AMD and Nvidia have 14nm refreshes coming this year, and relative rankings could change depending on the architectures of the new cards. For now, however, AMD seems to be gaining more from the DX12 shift than Nvidia is — the Fury X is an absolute titan in Ashes of the Singularity.

http://www.kurzweilai.net/real-or-computer-generated-can-you-tell-the-difference

Real or computer-generated: can you tell the difference?

Training helps humans tell them apart … but soon, only computers will know what’s real or not
February 22, 2016

Which of these are photos vs. computer-generated images? (credit: Olivia Holmes et al./ACM Transactions on Applied Perception)

As computer-generated characters become increasingly photorealistic, people are finding it harder to distinguish between real and computer-generated, a Dartmouth College-led study has found.

This has introduced complex forensic and legal issues*, such as how to distinguish between computer-generated and photographic images of child pornography, says Hany Farid, a professor of computer science and pioneering researcher in digital forensics at Dartmouth, and senior author of a paper in the journal ACM Transactions on Applied Perception.

“This can be problematic when a photograph is introduced into a court of law and the jury has to assess its authenticity,” Farid says.

Training helps … for now

In their study, Farid’s team conducted perceptual experiments in which 60 high-quality computer-generated and photographic images of men’s and women’s faces were shown to 250 observers. Each observer was asked to classify each image as either computer generated or photographic. Observers correctly classified photographic images 92 percent of the time, but correctly classified computer-generated images only 60 percent of the time.

The top row images are all computer-generated, paired here with their photographic matches below (credit: Olivia Holmes et al./ACM Transactions on Applied Perception)

But in a follow-up experiment, when the researchers provided a second set of observers with some training before the experiment, their accuracy on classifying photographic images fell slightly to 85 percent but their accuracy on computer-generated images jumped to 76 percent.

With or without training, observers performed much worse than the participants in a study Farid’s team conducted five years ago, when computer-generated imagery was not as photorealistic.

When humans can no longer judge what’s real

“We expect that human observers will be able to continue to perform this task for a few years to come, but eventually we will have to refine existing techniques and develop new computational methods that can detect fine-grained image details that may not be identifiable by the human visual system,” says Farid.

The study, which included Dartmouth student Olivia Holmes and Professor Martin Banks at the University of California, Berkeley, was supported by the National Science Foundation.

* Legal background

  • In 1996, Congress passed the Child Pornography Prevention Act (CPPA), which made illegal “any visual depiction including any photograph, film, video, picture or computer-generated image that is, or appears to be, of a minor engaging in sexually explicit conduct.”
  • In 2002, the U.S. Supreme Court ruled that the CPPA infringed on the First Amendment and classified computer-generated child pornography as protected speech. As a result, defense attorneys need only claim their client’s images of child pornography are computer generated.
  • In 2003, Congress passed the PROTECT Act, which classified computer generated child pornography as “obscene,” but this law didn’t eliminate the so-called “virtual defense” because juries are reluctant to send a defendant to prison for merely possessing computer-generated imagery when no real child was harmed.

Abstract of Assessing and Improving the Identification of Computer-Generated Portraits

Modern computer graphics are capable of generating highly photorealistic images. Although this can be considered a success for the computer graphics community, it has given rise to complex forensic and legal issues. A compelling example comes from the need to distinguish between computer-generated and photographic images as it pertains to the legality and prosecution of child pornography in the United States. We performed psychophysical experiments to determine the accuracy with which observers are capable of distinguishing computer-generated from photographic images. We find that observers have considerable difficulty performing this task—more difficulty than we observed 5 years ago when computer-generated imagery was not as photorealistic. We also find that observers are more likely to report that an image is photographic rather than computer generated, and that resolution has surprisingly little effect on performance. Finally, we find that a small amount of training greatly improves accuracy.

http://www.kurzweilai.net/cancer-in-3-d

Cancer cells in 3D

What researchers miss on glass slides
February 22, 2016

A spheroid of many lung cancer cells illustrates a diversity of behaviors. (credit: Welf and Driscoll et al./Developmental Cell)

Cancer cells don’t live on glass slides. Yet the vast majority of images related to cancer biology come from the cells being photographed on flat, two-dimensional surfaces — images sometimes used to draw conclusions about the behavior of cells that normally reside in a more complex environment.

Now a new high-resolution microscope, presented (open access) February 22 in Developmental Cell, makes it possible to visualize cancer cells in 3D and record how they are signaling to other parts of their environment — revealing previously unappreciated biology of how cancer cells survive and disperse within living things. Based on “microenvironmental selective plane illumination microscopy” (meSPIM), the new microscope is designed to image cells in microenvironments free of hard surfaces near the sample.

“There is clear evidence that the environment strongly affects cellular behavior — thus, the value of cell culture experiments on glass must at least be questioned,” says senior author Reto Fiolka, an optical scientist at the University of Texas Southwestern Medical Center. “Our microscope is one tool that may bring us a deeper understanding of the molecular mechanisms that drive cancer cell behavior, since it enables high-resolution imaging in more realistic tumor environments.”


This image shows the extracted surfaces of two cancer cells. (Left) A lung cancer cell colored by actin intensity near the cell surface. Actin is a structural molecule that is integral to cell movement. (Right) A melanoma cell colored by PI3-kinase activity near the cell surface. PI3K is a signaling molecule that is key to many cell processes. (credit: Welf and Driscoll et al./Developmental Cell)

Hidden protrusions from cancer cells

In their study, Fiolka and colleagues, including co-senior author Gaudenz Danuser, and co-first authors Meghan Driscoll and Erik Welf, also of UT Southwestern, used their microscope to image different kinds of skin cancer cells from patients. They found that in a 3D environment (where cells normally reside), unlike a glass slide, multiple melanoma cell lines and primary melanoma cells (from patients with varied genetic mutations) form many small protrusions called blebs.

One hypothesis is that this blebbing may help the cancer cells survive or move around and could thus play a role in skin cancer cell invasiveness or drug resistance in patients.


This is a melanoma cell (red) embedded in a 3-D collagen matrix (white). A 100 x 100 x 100 μm cube is shown, with one corner cut away to show the interaction of the cell with the collagen. (credit: Welf and Driscoll et al./Developmental Cell)

The researchers say that this is a first step toward understanding 3D biology in tumor microenvironments. But since these kinds of images may be too complicated to interpret by the naked eye alone, the next step will be to develop powerful computer platforms to extract and process the information.

The microscope control software and image analytical code are freely available to the scientific community.

The authors were supported by the Cancer Prevention Research Institute of Texas and the National Institutes of Health.


Abstract of Quantitative Multiscale Cell Imaging in Controlled 3D Microenvironments

The microenvironment determines cell behavior, but the underlying molecular mechanisms are poorly understood because quantitative studies of cell signaling and behavior have been challenging due to insufficient spatial and/or temporal resolution and limitations on microenvironmental control. Here we introduce microenvironmental selective plane illumination microscopy (meSPIM) for imaging and quantification of intracellular signaling and submicrometer cellular structures as well as large-scale cell morphological and environmental features. We demonstrate the utility of this approach by showing that the mechanical properties of the microenvironment regulate the transition of melanoma cells from actin-driven protrusion to blebbing, and we present tools to quantify how cells manipulate individual collagen fibers. We leverage the nearly isotropic resolution of meSPIM to quantify the local concentration of actin and phosphatidylinositol 3-kinase signaling on the surfaces of cells deep within 3D collagen matrices and track the many small membrane protrusions that appear in these more physiologically relevant environments.

http://news.ubc.ca/2016/02/23/friends-matter-babies-use-group-size-to-determine-social-dominance/

Friends matter: Babies use group size to determine social dominance


Infants as young as six months understand social dominance. Photo: Flickr

A new study out of UBC’s department of psychology finds that infants as young as six months figure out that a person with more friends will be more dominant than someone with fewer companions. In this Q&A, Anthea Pun, the study’s lead author and a graduate student, discusses her research and explores how social dominance asserts itself – even among the very young.

What’s the relationship between babies and social dominance?

Babies are able to reason about complex social and moral concepts within the first few months of life. Since babies this young haven’t had much social experience, infant research helps us understand the kinds of human competencies and abilities that may have a longer evolutionary history, and thus were likely important for survival.

By six months of age, babies already expect that an individual with more friends should prevail over an individual with fewer friends. This suggests that babies understand the importance of having more people to “back you up” in a competition. This is important for babies to understand because they are very vulnerable and rely on the protection of others. They must quickly learn whom to trust, and who will support them in times of need.

What does your research involve?

Past work has shown that babies as young as 10 months think that physically bigger individuals should get their way in a competition against smaller individuals. But younger babies – eight to nine months old – don’t necessarily think so. We wondered whether younger babies just aren’t old enough to understand such complex social relationships, or if there are other social cues they are more aware of, such as group size.

Many social species rely on group members to help them out in times of need, so we wondered whether young babies might also begin to understand that having more friends can bring a competitive advantage. Our lab is currently exploring whether infants prefer to be part of larger groups, and whether they may punish group members who do not come to the aid of friends in need.

What did the results show?

That babies expect characters from larger groups to get their way, and thus to be more dominant than individuals from smaller groups.

In many social species, being part of a group provides many benefits, such as access to desirable resources and protection from opposing groups. Therefore, even if you are the smallest person in the group, you benefit from teaming up with others in your camp – giving you strength in numbers. This ability to detect when a group is larger (or smaller) than one’s own group has emerged in a variety of social species, and may reflect an ancient capacity to represent dominance relationships that babies are pre-prepared to learn.

The paper, “Infants Use Relative Numerical Group Size to Infer Social Dominance,” is published in PNAS Early Edition. The study’s co-authors include Susan Birch and Andrew Baron of UBC’s department of psychology.

http://www.ndtv.com/health/low-voltage-electric-currents-may-treat-adults-with-lazy-eye-1280483

Low Voltage Electric Currents May Treat Adults With Lazy Eye

http://canadafreepress.com/article/statistical-forecasting-how-fast-will-future-warming-be

Statistical Forecasting: How Fast Will Future Warming Be?


Guest Column by Dr. Benny Peiser, February 23, 2016

A new paper published today by the Global Warming Policy Foundation explains how statistical forecasting methods can provide an important contrast to climate model-based predictions of future global warming. The repeated failures of economic models to generate accurate predictions have taught many economists a healthy scepticism about the ability of their own models, regardless of how complex, to provide reliable forecasts.

Statistical forecasting has proven in many cases to be a superior alternative. Like the economy, the climate is a deeply complex system that defies simple representation. Climate modelling thus faces similar problems.—Global Warming Policy Foundation, 23 February 2016

The global average temperature is likely to remain unchanged by the end of the century, contrary to predictions by climate scientists that it could rise by more than 4C, according to a leading statistician. British winters will be slightly warmer but there will be no change in summer, Terence Mills, Professor of Applied Statistics at Loughborough University, said in a paper published by the Global Warming Policy Foundation. He found that the average temperature had fluctuated over the past 160 years, with long periods of cooling after decades of warming. Dr Mills said scientists who argued that global warming was an acute risk to the planet tended to focus on the period from 1975-98, when the temperature rose by about 0.5C. He said that his analysis, unlike computer models used by the IPCC to forecast climate change, did not include assumptions about the rate of warming caused by rising emissions. “It’s extremely difficult to isolate a relationship between temperatures and carbon dioxide emissions,” he said.—Ben Webster, The Times, 23 February 2016
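
Mills’s actual models are more elaborate than this, but the flavor of trendless statistical forecasting can be shown with a toy sketch: fit an AR(1) process to a short anomaly series by least squares and iterate it forward. With no assumed emissions-driven trend, the forecast settles toward the estimated long-run mean rather than climbing. The data below are placeholders, not HadCRUT values, and this is not a reconstruction of the GWPF paper.

```cpp
#include <cstdio>
#include <vector>

// Toy AR(1) forecaster: x[t+1] = c + phi * x[t] + noise.
// If |phi| < 1, iterated forecasts converge to the unconditional
// mean c / (1 - phi) rather than following an upward trend.
int main() {
    // Placeholder annual temperature anomalies (deg C) -- illustrative only.
    std::vector<double> x = { -0.10, 0.00, 0.05, -0.02, 0.10, 0.08, 0.12, 0.09 };

    // Ordinary least squares on the (x[t], x[t+1]) pairs.
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const int n = static_cast<int>(x.size()) - 1;
    for (int t = 0; t < n; ++t) {
        sx  += x[t];        sy  += x[t + 1];
        sxx += x[t] * x[t]; sxy += x[t] * x[t + 1];
    }
    const double phi = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    const double c   = (sy - phi * sx) / n;

    // Iterate the fitted equation forward ten steps.
    double forecast = x.back();
    for (int h = 1; h <= 10; ++h) {
        forecast = c + phi * forecast;
        std::printf("h=%2d  forecast=%+.3f\n", h, forecast);
    }
    std::printf("long-run mean=%+.3f\n", c / (1.0 - phi));
    return 0;
}
```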

In this insightful essay, Terence Mills explains how statistical time series forecasting methods can be applied to climatic processes. The question has direct bearing on policy issues since it provides an independent check on the climate model projections that underpin calculations of the long term social costs of greenhouse gas emissions. In this regard, his conclusion that statistical forecasting methods do not corroborate the upward trends seen in climate model projections is highly important and needs to be taken into consideration.

As one of the leading contributors to the academic literature on this subject, Professor Mills writes with great authority, yet he is able to make the technical material accessible to a wide audience.—Professor Ross McKitrick, Global Warming Policy Foundation, February 2016

The framework illustrates that unreliable climate simulations are prone to overestimate the attributable risk to climate change. Climate model ensembles tend to be overconfident in their representation of the climate variability which leads to systematic increase in the attributable risk to an extreme event.—Omar Bellprat and Francisco Doblas-Reyes, Geophysical Research Letters, 19 February 2016

The thesis caused headlines around the world: The war in Syria has been caused mainly by anthropogenic climate change, news media and politicians proclaimed. They rely on climate scientists who have published similar studies. German researchers have now published a joint statement in which they contradict the thesis. “The frequently advocated causality between drought, migration and the outbreak of conflict in Syria is simplistic and untenable,” says the German Climate Consortium, a coalition of numerous research institutes.—Alex Bojanowski, Der Spiegel, 15 February 2016

Thankfully for the Zimbabwean dictator, there are plenty of gullible Westerners willing to believe that the frighteningly vile, comically incompetent government isn’t at the root of Zimbabwe’s food shortages, but that global warming is to blame. Of course, this is pure nonsense. Botswana and Zimbabwe share a border and their climate and natural resources are exceptionally similar. Yet, since 2004, food production has increased by 29 percent in Botswana, while declining by 9 percent in Zimbabwe. It is not drought but government policies that make nations starve!—Marian L. Tupy, Foundation for Economic Education, 18 February 2016

http://www.dnaindia.com/scitech/report-zte-s-spro-plus-is-an-8-inch-tabet-with-a-laser-projector-crammed-inside-2181500

ZTE’s Spro Plus is an 8-inch tablet with a laser projector crammed inside.

The Spro Plus is expected to hit stores later this year.

Chinese manufacturer ZTE is at the Mobile World Congress in Barcelona to present its latest range of devices, including one oddity — the Spro Plus — a hybrid gadget that’s a portable projector and a touchscreen tablet all rolled into one.

As a successor to the Spro 2 — ZTE’s Android pico projector with built-in touch screen — the ZTE Spro Plus has the form factor and power of an Android tablet. Presented at the Barcelona trade fair and due to go on sale by summer 2016, the device has an 8.4-inch screen (2560×1440), 3GB of RAM, 32 to 128GB of memory (depending on the version) and a microSD card slot. The Spro Plus has a built-in laser projector with brightness up to 500 lumens, which can project an 80-inch image from a distance of 2.4 meters. The projector will be available in WiFi and 4G/WiFi models but there’s no word yet on how much this two-in-one might cost. Note that ZTE counts over 500,000 unit sales of its Spro 1 and 2 projectors.

Alongside the Spro Plus, the Chinese manufacturer unveiled its new generation of Blade smartphones — the V7 and V7 Lite — at MWC 2016. These are stylish 5.2-inch and 5-inch mid-range Android handsets, expected to land in the spring in selected countries. ZTE also presented an original accessory, in the form of a connected ring. Dubbed the iCharming, this micro wearable can be used for activity tracking or to take photos via a smartphone camera, and has an SOS function to send an emergency message to a selected contact. As yet, no official release date has been announced.

Mobile World Congress runs February 22-25, 2016, in Barcelona.

http://www.iclarified.com:8080/54047/get-sonys-extra-bass-bluetooth-wireless-headphones-for-50-off-deal

Get Sony’s Extra Bass Bluetooth Wireless Headphones for 50% Off [Deal]

Looking for a good pair of wireless headphones? Sony’s manufacturer-refurbished MDR-XB950BT/B Extra Bass Bluetooth Wireless Headphones with Microphone are currently 50% off list price on Amazon. The headphones have a rating of 4.3 out of 5 stars across more than 130 customer reviews.

Wireless freedom, sleek comfort, and unmistakable bass response add up to an unforgettable audio experience. Connect via Bluetooth with NFC and let your music loose for up to 20 hours of battery life, anytime, anywhere. 40mm drivers with electronic bass boost will add punch to your favorite tracks.


Features:
● Bluetooth audio streaming with AAC and apt-X support
● Electronic Bass Boost circuitry for added bass emphasis
● Passive mode for normal, corded operation without battery
● Comfortable around-the-ear design
● This Certified Refurbished product is manufacturer refurbished, shows limited or no wear, and includes all original accessories plus a 90-day limited hardware warranty.


http://www.pcworld.com/article/3036649/android/google-teams-up-with-carriers-to-speed-adoption-of-rich-communication-services.html

Google teams up with carriers to speed adoption of Rich Communication Services

An Android client is in the works, which should spread adoption of this major upgrade for mobile messaging.

Better mobile messaging for Android is on the horizon. As part of the Mobile World Congress festivities, the GSM Association announced that it’s partnered up with Google and 15 global carriers to push adoption of Rich Communication Services (RCS).

RCS brings to standard messaging the features found in third-party services like Facebook Messenger, WhatsApp, and Hangouts: real-time typing and read notifications, support for higher-resolution images, location sharing, and emoticons. Just as with Apple’s iMessage, you won’t have to sign up for another account with a third-party service, as it will be integrated with your phone’s standard messaging.

Google has pledged to build a dedicated RCS client thanks to its acquisition of Jibe, a carrier-based messaging platform. Carriers will be able to tap into Jibe’s platform to deliver RCS, or they can build their own infrastructure. T-Mobile has already rolled out some support for RCS with its own Advanced Messaging service.

Expect to hear more about this at Google I/O, as that’s usually a prime venue for Google to announce new initiatives or expand on major projects.

Why this matters: SMS and MMS messaging can be a pain point for Android users when compared to the iPhone, which offers a zippier, although proprietary, system with iMessage (which is not without its own issues, it should be noted). RCS is exciting because it can bring this type of capability to everyone regardless of device. Though how much Apple would support this remains to be seen, as universal and open standards are not usually the company’s forte.

For comprehensive coverage of the Android ecosystem, visit Greenbot.com.

http://news.ubc.ca/2016/02/22/ubc-students-get-first-look-at-140-sq-ft-nano-suite/

UBC students get first look at 140-sq-ft “Nano” suite


Event: A full-scale mockup of a new 140-sq-ft student housing unit is on display for UBC students.
Date/Time: Monday, Feb. 22, 10 a.m. to 1 p.m.
Who: Andrew Parr, Managing Director, Student Housing and Hospitality Services, will be available for media interviews at the AMS Student Nest
Location: AMS Student Nest – 6133 University Blvd, Vancouver
Parking: North Parkade – 6115 Student Union Boulevard V6T 1Z1

Beginning today, UBC students will get a first look at a full-scale mockup of a 140-sq-ft student housing unit called a Nano suite. In June 2015, UBC announced the development of these smaller, less expensive student housing units to help address housing affordability and the demand for on-campus housing. A total of 70 Nano suites will be included as a pilot project in a new student residence building slated for completion in 2019.

More information on the Nano suites, as well as still images and video, is available here: http://vancouver.housing.ubc.ca/rooms/nano/

Video: UBC SHHS Nano Suite Virtual Tour, from Picture and Color on Vimeo.