Physicians' main complaints about electronic health records differed depending on whether they were office- or hospital-based

The analysis by researchers at Brown University and Healthcentric Advisors, novel for its relatively large sample size and its incorporation of both hospital and office-based physicians, is based on the open-ended answers that 744 doctors gave to this question on a Rhode Island Department of Health survey in 2014: “How does using an EHR affect your interaction with patients?”

The survey question was optional but hardly trivial, said study co-author Dr. Rebekah Gardner, an associate professor of medicine at the Warren Alpert Medical School and a senior medical scientist with Healthcentric Advisors. With the goal of improving the quality of care, federal “Meaningful Use” standards have vastly expanded the amount of information that doctors must capture. But the American Medical Association has raised concerns about EHR software usability, and studies have shown that the burden of meticulously filling out electronic health records is a major cause of physician burnout, a state of emotional exhaustion, depersonalization, and discouragement at work.

“Physicians who are burnt out provide lower-quality care,” Gardner said. “What this speaks to is that we, as physicians, need to demand a rethinking of how quality is measured and if we’re really getting the quality we hoped when we put in EHRs. There are unintended consequences of measuring quality as it’s currently being done.”

Gardner also cited research indicating that patients who feel their doctors don’t understand them or communicate poorly are less likely to stick with treatments and engage in follow-up visits, which can undermine their care.

In highlighting how EHRs impose different burdens on different physicians, the study in the Journal of Innovation in Health Informatics illustrates that EHRs pose a multifaceted set of problems for medical practice, she said. Even so, doctors responding to the survey also acknowledged that EHRs are both here to stay and provide important benefits, such as ready access to information.

To conduct the study, Gardner worked with lead author Kimberly Pelland of Healthcentric Advisors and Rosa Baier, associate director of the Center for Long-Term Care Quality and Innovation and an associate professor of the practice at the Brown University School of Public Health.

Reckonings on records

The study quotes survey answers that exemplified the problems and promise of EHRs. Office-based physicians typically bring their computers into the exam room, leading one doctor to worry that staring at a computer rather than the patient seemed rude: “[It’s] like having someone at the dinner table texting rather than paying attention.”

Hospital physicians, meanwhile, typically perform their record keeping outside the exam room. They find that being tethered to their computers means they can’t visit patients as much. As one such doctor put it: “I now spend much less time [with] patients because I know I have hours of data entry to complete.”

While office-based physicians mainly complained about patient interaction and hospital-based physicians primarily worried over reduced time for patients, each group’s second-most common lament was the other group’s most common. They share the same concerns, albeit in distinct orders.

A minority of physicians said EHRs did not undermine their ability to connect with patients. The study noted one whose patients happened to be newborns and another who cited employing a medical scribe to handle the data entry during office visits.

More commonly, Pelland said, the way that physicians try to minimize the impact on patient care is to shift data entry to, as one office-based doctor put it, “hours and hours of work at home.” Doctors have also begun to seek out continuing medical education on how to best integrate EHR use during patient visits to minimize disruption.

In the survey, doctors sometimes acknowledged that records can provide benefits to patient interaction. One hospital doctor praised the ease that EHRs provide in calling up a patient’s history. Some office-based physicians, meanwhile, commented on how web-based patient portals improve communication with patients. Others described how they make use of their computers to interact with patients, for instance by calling up and displaying educational illustrations of medical conditions.

Scanning Google Street View images for signs of urban change

Nikhil Naik, Scott Duke Kominers, and their collaborators are hoping to transform the way scientists study urban environments — with an assist from Google.

In joint work with Edward L. Glaeser, the Fred and Eleanor Glimp Professor of Economics at Harvard, and César A. Hidalgo and Ramesh Raskar, associate professors at the MIT Media Lab, Kominers, an associate professor in the Entrepreneurial Management Unit at HBS and the Department of Economics, and Naik, a Prize Fellow in Economics, History, and Politics, authored a study that uses computer vision algorithms to examine millions of Google Street View images in an effort to measure whether and how urban areas are changing.

In addition to demonstrating the effectiveness of the technology, the study found that two key demographic characteristics — high density and high education — play important roles in urban improvement, and lent support to three classical theories of urban change. The study is described in a July 6 paper in Proceedings of the National Academy of Sciences.

“Lots of people, including social scientists and urban planners, are interested in studying why places evolve and how much change happens in different cities,” Naik said. “But there is a lack of data on the physical aspects of urban change.”

That’s where Google Street View imagery comes in.

For the past decade, Naik said, the tech giant has collected millions of Street View images from across the country as part of its mapping service. What’s more, it keeps those maps up to date by periodically re-photographing the same locations in major cities. Consequently, Street View contains a rich database of urban images that researchers can use to follow cities through time.

Using Street View images to track urban change isn’t a new idea, though.

In 2014, then-doctoral student Jackelyn Hwang and Robert Sampson, the Henry Ford II Professor of the Social Sciences, published a pioneering study that employed a team of volunteers to analyze Street View images and locate signs of gentrification across 3,000 city blocks in Chicago.

Naik and co-authors took this idea a step further by using artificial intelligence to automate the process.

“By having a computer do it, we were able to really scale up the analysis, so we examined images of about 1.6 million street blocks from five cities — Boston, New York, Washington, DC, Baltimore and Detroit,” Naik said.

At the heart of the system is an artificial intelligence algorithm the collaborators “taught” to view street scenes the same way humans do.

Originally developed by Naik, Raskar, and Hidalgo during Naik’s graduate studies at the MIT Media Lab, the algorithm computes “Streetscore” — a measure of the perceived safety of a streetscape, based on Street View photos and image preferences collected from thousands of online volunteers.

“We built on this algorithm to calculate Streetchange — the change in Streetscore for pairs of Street View images of the same location captured seven years apart,” Naik said. “A positive value of Streetchange is associated with new construction or upgrades, and a negative value is associated with overall decline.”
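The Streetchange calculation Naik describes can be sketched in a few lines. The following Python sketch is illustrative only: `streetscore` here is a stub standing in for the trained computer-vision model, which in the actual study regresses perceived safety from visual features of a Street View photo.

```python
# Illustrative sketch of the Streetchange calculation described above.
# `streetscore` is a stand-in for the trained model, not the real one.

def streetscore(image_features):
    """Stub for the trained model: maps image features to a safety score."""
    # The real model runs a learned regression on visual features;
    # this stub simply averages the feature values.
    return sum(image_features) / len(image_features)

def streetchange(features_t0, features_t1):
    """Change in Streetscore for the same location, years apart.

    Positive values suggest new construction or upgrades;
    negative values suggest overall decline.
    """
    return streetscore(features_t1) - streetscore(features_t0)

# A block whose later image scores higher is flagged as improving.
improving = streetchange([0.2, 0.4], [0.6, 0.8])   # positive: improvement
declining = streetchange([0.6, 0.8], [0.2, 0.4])   # negative: decline
```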

Support to prevent cyber attacks

Recognition that cyber attacks are complex, and that tackling cyber security is a multi-stakeholder problem, lies at the heart of a new data-driven cyber security system being developed by experts led by the University of Nottingham. The aim is to support organisations of all sizes in maintaining adequate levels of cyber security through a semi-automatic, regularly updated, organisation-tailored security assessment of their digital infrastructures.

The £1 million project, funded by the Engineering and Physical Sciences Research Council (EPSRC) and the National Cyber Security Centre (formerly CESG), will establish the foundations for a digital ‘Online Cyber Security System’ decision support service (OCYSS) which is designed to rapidly bring together information on system vulnerabilities and alert organisations which may be affected.

The interdisciplinary project brings together academics in different areas of cyber security, information integration and decision making from the University of Nottingham, UK and Carnegie Mellon University, USA. They will be working closely with the UK’s National Cyber Security Centre.

Dr Christian Wagner, from the School of Computer Science at the University of Nottingham, who is currently also a visiting professor at Michigan Technological University, USA, is the lead academic. He said: “While the UK has access to some of the world’s leading experts in cyber security, the scale and variety of systems in UK organisations, both public and private, make it extremely challenging to flag potential system threats in a timely fashion. This international collaborative project targets a novel approach to semi-automatically identify system vulnerabilities, thus greatly increasing the efficiency and capacity to respond to emerging threats.” Also involved as co-investigators are Prof. Garibaldi, who has previously worked with the team at CESG on modelling expert decision making, and Prof. McAuley, who is Director of the Horizon Digital Economy Hub and has specific expertise in security and privacy research.

The UK cyber security sector already has world-leading capabilities and is worth over £6 billion, employing 40,000 people. Cyber attacks are increasing in severity and sophistication and companies are struggling to recruit the expertise needed to defend their organisations.

Cyber security underpinned with scientific expertise

The system will be designed to directly address the acute shortage of highly qualified cyber security experts available to organisations of all sizes — from government to industry.

Dr Wagner notes: “The lack of sufficient access to highly trained and experienced cyber security experts is a key challenge for the UK. It prevents a range of users from establishing and maintaining continuously adequate levels of protection of their assets in a rapidly changing security landscape. We view this challenge as a multi-stakeholder problem because a number of human stakeholders, from users and IT managers, with varying levels of expertise, to cyber security and software providers, need to effectively communicate and work together in order to deliver systems with an appropriate level of cyber security assurance.”

This new, semi-automatic, data-driven approach is underpinned by novel research on integrating information from a number of different sources while managing discord and potential dependencies of individual components within systems. The aim is to enable systems which are capable of maximizing the utility of the available cyber security insights and to rapidly deliver user-tailored, up-to-date threat analysis and decision support to help organisations mitigate potential cyber attacks before they happen.

Wisdom of walkable communities

Using a larger dataset than any previous study of human movement, National Institutes of Health-funded researchers at Stanford University in Palo Alto, California, have tracked physical activity by population for more than 100 countries. Their research follows a recent estimate that more than 5 million people die each year from causes associated with inactivity.

The large-scale study of daily step data from anonymous smartphone users dials in on how countries, genders, and community types fare in terms of physical activity and what results may mean for intervention efforts around physical activity and obesity. The study was published July 10, 2017, in the advance online edition of Nature.

“Big data is not just about big numbers, but also the patterns that can explain important health trends,” said Grace Peng, Ph.D., director of the National Institute of Biomedical Imaging and Bioengineering (NIBIB) program in Computational Modeling, Simulation and Analysis.

“Data science and modeling can be immensely powerful tools. They can aid in harnessing and analyzing all the personalized data that we get from our phones and wearable devices.”

Almost three quarters of adults in developed countries and half of adults in developing economies carry a smartphone. The devices are equipped with tiny accelerometers, the computer chips that detect the phone’s orientation to keep the screen upright, which can also automatically record stepping motions. The users whose data contributed to this study subscribed to Azumio Argus, a free app for tracking physical activity and other health behaviors.
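To illustrate the kind of signal an accelerometer provides, here is a toy step detector in Python. The threshold-crossing approach and all numbers are assumptions chosen for illustration; they are not the actual method used by the Argus app.

```python
import math

# Toy step detector: counts steps as upward crossings of an
# acceleration-magnitude threshold. Illustrative assumptions only.

def count_steps(samples, threshold=11.0):
    """Count steps in a list of (ax, ay, az) readings in m/s^2.

    At rest the magnitude stays near gravity (~9.8 m/s^2); each step
    produces a brief spike above it, which we count once per crossing.
    """
    steps = 0
    above = False
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > threshold and not above:
            steps += 1      # rising edge: a new spike, i.e. one step
            above = True
        elif magnitude <= threshold:
            above = False
    return steps

# Two spikes above the threshold, separated by quiet samples: two steps.
readings = [(0, 0, 9.8), (0, 0, 13.0), (0, 0, 9.8), (0, 0, 12.5), (0, 0, 9.8)]
print(count_steps(readings))  # prints 2
```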

In their study, Scott L. Delp, Ph.D., James H. Clark Professor of Bioengineering and director of the Mobilize Center at Stanford University, and colleagues analyzed 68 million days of minute-by-minute step recordings from 717,527 anonymous users of the smartphone app. Participation spanned 111 countries, but the researchers focused their study on 46 countries, each with at least 1,000 users. Of those, 90 percent of users were from 32 high income countries and 10 percent were from 14 middle income countries. The Stanford Mobilize Center is an NIH Big Data 2 Knowledge Center of Excellence.

“The study is 1,000 times larger than any previous study on human movement,” said Delp. “There have been wonderful health surveys done, but our new study provides data from more countries, many more subjects, and tracks people’s activity on an ongoing basis in their free-living environments versus a survey in which you rely on people to self-report their activity. This opens the door to new ways of doing science at a much larger scale than we have been able to do before.”

In addition to the step records, the researchers accessed the age, gender, height, and weight of users who registered with the smartphone app. They used the same calculation that economists use for income inequality — called the Gini index — to calculate activity inequality by country.
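The Gini index the researchers borrowed from economics is straightforward to compute. Here is a minimal Python sketch applied to hypothetical daily step counts (the step numbers are invented for illustration):

```python
# Gini index applied to daily step counts, mirroring how the study
# measures "activity inequality" across a population.

def gini(values):
    """Gini index: 0.0 means everyone is equally active; values near 1
    mean a few people account for most of the activity."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula over the sorted values' rank-weighted sum.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

# A population with identical step counts has zero inequality...
print(gini([5000, 5000, 5000, 5000]))   # prints 0.0
# ...while one very active person among sedentary ones scores much higher.
print(round(gini([500, 500, 500, 15000]), 3))   # prints 0.659
```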

“These results reveal how much of a population is activity-rich, and how much of a population is activity-poor,” Delp said. “In regions with high activity inequality there are many people who are activity poor, and activity inequality is a strong predictor of health outcomes.”

When a tumor cell invades a new tissue or organ

Cells push out tiny feelers to probe their physical surroundings, but how much can these tiny sensors really discover? A new study led by Princeton University researchers and colleagues finds that the typical cell’s environment is highly varied in the stiffness or flexibility of the surrounding tissue, and that to gain a meaningful amount of information about its surroundings, the cell must move around and change shape. The finding aids the understanding of how cells respond to mechanical cues and may help explain what happens when migrating tumor cells colonize a new organ or when immune cells participate in wound healing.

“Our study looks at how cells literally feel their way through an environment, such as muscle or bone,” said Ned Wingreen, Princeton’s Howard A. Prior Professor in the Life Sciences and professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics. “These tissues are highly disordered on the cellular scale, and the cell can only make measurements in the immediate area around it,” he said. “We wanted to model this process.” The study was published online on July 18 in the journal Nature Communications.

The organs and tissues of the body are enmeshed in a fiber-rich structure known as the extracellular matrix, which provides a scaffold for the cells to live, move and differentiate to carry out specific functions. Cells interact with this matrix by extending sticky proteins out from the cell surface to pull on nearby fibers. Previous work, mostly employing artificial flat surfaces, has shown that cells can use this tactile feedback to determine the elasticity or stiffness in a process called mechanosensing. But because the fibers of the natural matrix are all interconnected in a jumbled, three-dimensional network, it was not clear how much useful information the cell could glean from feeling its immediate surroundings.

To find out, the researchers built a computer simulation that mimicked a typical cell in a matrix made of collagen protein, which is found in skin, bones, muscles and connective tissue. The team also modeled a cell in a network of fibrin, the strong, stringy protein that makes up blood clots. To accurately capture the composition of these networks, the researchers worked with Chase Broedersz, a former Princeton Lewis-Sigler Fellow who is now professor of physics at Ludwig-Maximilians-University of Munich, and his colleagues Louise Jawerth and Stefan Münster to first create physical models of the matrices, using approaches originally developed in the group of collaborator David Weitz, a systems biologist at Harvard University. Princeton graduate student Farzan Beroz then used those models to recreate virtual versions of the collagen and fibrin networks in computer models.

With these virtual networks, Beroz, Broedersz and Wingreen could then ask the question: can cells glean useful information about the elasticity or stiffness of their environment by feeling their surroundings? If the answer is yes, then the finding would shed light on how cells can change in response to those surroundings. For example, the work might help explain how cancer cells are able to detect that they’ve arrived at an organ that has the right type of scaffold to support tumor growth, or how cells that arrive at a wound know to start secreting proteins to promote healing.

Using mathematics, the researchers calculated how the networks would deform when cells pull on nearby fibers. They found that both the collagen and fibrin networks contained configurations of fibers with remarkably broad ranges of collective stiffness, from rather bendable to very rigid, and that these regions could be immediately next to each other. As a result, a cell could have two nearby probes, one sensing a stiff region and the other a soft one, making it difficult for the cell to learn by mechanosensing what type of tissue it inhabits. “We were surprised to find that the cell’s environment can vary quite a lot even across a small distance,” Wingreen said.

Fitness Apps on Your Apple Watch

Activity, shown in Figure 2.22, is designed for use on an ongoing basis, throughout your day. The app takes advantage of the watch’s built-in sensors and continuously monitors your level of movement, exercise, and periods of inactivity (as long as you’re wearing the watch).

As you’ll discover, this information displays in several ways, and syncs automatically with the Activity and Health apps running on your iPhone. The Activity app also shares data with other health and fitness apps, as needed.

The Activity app can help you become more active throughout your day via three daily fitness goals: Move, Exercise, and Stand, which you set up the first time you activate the app. For example, the watch can tap your wrist using its haptic engine and display a message reminding you to stand up for at least one minute every hour (see Figure 2.23).


You use the Workout app, shown in Figure 2.24, whenever you engage in any type of cardio fitness workout, such as jogging, running, bike riding, or using an elliptical machine or treadmill. The Workout app collects and displays real-time stats, such as time, distance, calories, pace, and speed.


As in the Activity app, the data the watch collects and displays automatically syncs with the iPhone, and a growing selection of other health- and fitness-oriented apps, including the Health app that comes preinstalled with iOS 8.2 on the iPhone, can use this data.

The Health app on the iPhone collects real-time health and fitness data from the iPhone, the Apple Watch, and other compatible equipment (such as a Bluetooth scale), as well as data you enter manually. It then monitors, analyzes, stores, and optionally shares that information from one centralized app.

High resolution earth system model with human dimensions

A new integrated computational climate model developed to reduce uncertainties in future climate predictions marks the first successful attempt to bridge Earth systems with energy and economic models and large-scale human impact data. The integrated Earth System Model, or iESM, is being used to explore interactions among the physical climate system, biological components of the Earth system, and human systems.

By using supercomputers such as Titan, a large multidisciplinary team of scientists led by Peter Thornton of the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) had the power required to integrate massive codes that combine physical and biological processes in the Earth system with feedbacks from human activity.

“The model we developed and applied couples biospheric feedbacks from oceans, atmosphere, and land with human activities, such as fossil fuel emissions, agriculture, and land use, which eliminates important sources of uncertainty from projected climate outcomes,” said Thornton, leader of the Terrestrial Systems Modeling group in ORNL’s Environmental Sciences Division and deputy director of ORNL’s Climate Change Science Institute.

Titan is a 27-petaflop Cray XK7 machine with a hybrid CPU-GPU architecture managed by the Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL.

Through the Advanced Scientific Computing Research Leadership Computing Challenge program, Thornton’s team was awarded 85 million compute hours to improve the Accelerated Climate Modeling for Energy (ACME) effort, a project sponsored by the Earth System Modeling program within DOE’s Office of Biological and Environmental Research. Currently, ACME collaborators are focused on developing an advanced climate model capable of simulating 80 years of historic and future climate variability and change in 3 weeks or less of computing effort.

Now in its third year, the project has achieved several milestones — notably the development of ACME version 1 and the successful inclusion of human factors in one of its component models, the iESM.

“What’s unique about ACME is that it’s pushing the system to a higher resolution than has been attempted before,” Thornton said. “It’s also pushing toward a more comprehensive simulation capability by including human dimensions and other advances, yielding the most detailed Earth system models to date.”

The Human Connection

To inform its Earth system models, the climate modeling community has a long history of using integrated assessment models — frameworks for describing humanity’s impact on Earth, including the source of global greenhouse gases, land use and land cover change, and other resource-related drivers of anthropogenic climate change.

Until now, researchers had not been able to directly couple large-scale human activity with an Earth system model. In fact, the novel iESM could mark a new era of complex and comprehensive modeling that reduces uncertainty by incorporating immediate feedbacks to socioeconomic variables for more consistent predictions.

The development of iESM started before the ACME initiative when a multilaboratory team aimed to add new human dimensions — such as how people affect the planet to produce and consume energy — to Earth system models. The model — now a part of the ACME human dimensions component — is being merged with ACME in preparation for ACME version 2.

Research expands basic knowledge about nonvolatile memory

A non-volatile memory that retains its digital information without power while operating at the ultrahigh speed of today’s dynamic random access memory (DRAM) — that is the dream of materials scientists at TU Darmstadt.

In a recent paper published online in the high-impact journal Advanced Functional Materials, the researchers investigated why hafnium oxide-based devices are so promising for memory applications and how the material can be tuned to perform at the desired level. This knowledge could form the basis for future mass application in all kinds of electronic devices.

This novel kind of non-volatile memory stores information by changing the electrical resistance of a metal-insulator-metal structure. The high- and low-resistance states represent zero and one, respectively, and do not vanish even when the computer is turned off. The basic principle of this resistive random access memory (RRAM) has been known for several years, but researchers and developers are still struggling to bring it into real-life applications.

Memory based on hafnium oxide is particularly interesting due to its superior properties. However, the devices still cannot be fabricated with the low variability and narrow spread of electronic properties required for large-scale production. Furthermore, the switching behavior is complex and still not fully understood.

Oxygen vacancies

The researchers of TU Darmstadt are following a recipe which has been extremely successful in semiconductor device technology: They focus on the defects in the material. “Up to now, it was not entirely clear which physical and chemical material properties govern the resistive switching process,” says Prof. Dr. Lambert Alff, head of the Advanced Thin Film Technology group in the Materials Science department of TU Darmstadt. His team focused their research on the role of oxygen defects in the functional material.

Using molecular beam epitaxy, a well-known technique from semiconductor technology, the group was able to produce RRAM structures in which only the oxygen concentration was varied while the rest of the device remained identical. “By changing the oxygen defect concentration in hafnium oxide we could unambiguously correlate the state of material with the resistive switching behavior of the memory device,” explains Sankaramangalam Ulhas Sharath, PhD student in the group and first author of the publication. Based on these results, the researchers developed a unified model connecting all switching states reported so far to the behavior of oxygen vacancies. Another exciting consequence of their work is the discovery that quantized conductance states can be stabilized at room temperature by controlling the oxygen vacancies, paving the way for novel quantum technologies.

Will RRAM be the replacement for Flash memory?

The improved understanding of the role of oxygen vacancies might be the key to producing RRAM cells with reproducible properties on a larger scale. Because of its inherent physical limitations, the currently prevailing flash technology is expected to be replaced by another non-volatile memory technology within the next few years. It could be RRAM that satisfies the ever-growing hunger for more energy-efficient and ubiquitous memory in cars, mobile phones, fridges, and more. It might even be particularly suited for neuromorphic circuits mimicking the functionality of the human brain — a visionary concept.

Making formulas easier to build and read by taking advantage of range names

A worksheet is merely a lifeless collection of numbers and text until you define some kind of relationship among the various entries. You do this by creating formulas that perform calculations and produce results. This chapter takes you through some formula basics, including constructing simple arithmetic and text formulas, understanding the all-important topic of operator precedence, copying and moving worksheet formulas, and making formulas easier to build and read by taking advantage of range names.


Understanding Formula Basics

Most worksheets are created to provide answers to specific questions: What is the company’s profit? Are expenses over or under budget, and by how much? What is the future value of an investment? How big will an employee’s bonus be this year? You can answer these questions, and an infinite number of others, by using Excel formulas.

All Excel formulas have the same general structure: an equal sign (=) followed by one or more operands, which can be values, cell references, ranges, range names, or function names, separated by one or more operators, which are symbols that combine the operands in some way, such as the plus sign (+) and the greater-than sign (>).
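A few simple examples illustrate this structure; each begins with an equal sign and combines operands with one or more operators (the cell addresses and range names here are placeholders):

```
=3+5*2           returns 13; multiplication runs before addition
=B2+B3           adds the values in cells B2 and B3
=Revenue-Costs   subtracts one named range from another
=A1>100          returns TRUE or FALSE via the greater-than operator
```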

It’s a good idea to know the limits Excel sets on various aspects of formulas and worksheet models, even though it’s unlikely that you’ll ever bump up against these limits. Formula limits that were expanded in Excel 2007 remain the same in Excel 2016. So, in the unlikely event that you’re coming to Excel 2016 from Excel 2003 or earlier, Table 3.1 shows you the updated limits.


Entering and Editing Formulas

Entering a new formula into a worksheet appears to be a straightforward process:

  1. Select the cell in which you want to enter the formula.
  2. Type an equal sign (=) to tell Excel that you’re entering a formula.
  3. Type the formula’s operands and operators.
  4. Press Enter to confirm the formula.

However, Excel has three different input modes that determine how it interprets certain keystrokes and mouse actions:

  • When you type the equal sign to begin the formula, Excel goes into Enter mode, which is the mode you use to enter text (such as the formula’s operands and operators).
  • If you press any keyboard navigation key (such as Page Up, Page Down, or any arrow key), or if you click any other cell in the worksheet, Excel enters Point mode. This is the mode you use to select a cell or range as a formula operand. When you’re in Point mode, you can use any of the standard range-selection techniques. Note that Excel returns to Enter mode as soon as you type an operator or any character.
  • If you press F2, Excel enters Edit mode, which is the mode you use to make changes to the formula. For example, when you’re in Edit mode, you can use the left and right arrow keys to move the cursor to another part of the formula for deleting or inserting characters. You can also enter Edit mode by clicking anywhere within the formula. Press F2 to return to Enter mode.

Hear and feel animated characters

Sit on Disney Research’s Magic Bench and you may have an elephant hand you a glowing orb. Or you might get rained on. Or a tiny donkey might saunter by and kick the bench.

It’s a combined augmented and mixed reality experience, but not the type that involves wearing a head-mounted display or using a handheld device. Instead, the surroundings are instrumented rather than the individual, allowing people to share the magical experience as a group.

People seated on the Magic Bench can see themselves in a mirrored image on a large screen in front of them, creating a third person point of view. The scene is reconstructed using a depth sensor, allowing the participants to actually occupy the same 3D space as a computer-generated character or object, rather than superimposing one video feed onto another.

“This platform creates a multi-sensory immersive experience in which a group can interact directly with an animated character,” said Moshe Mahler, principal digital artist at Disney Research. “Our mantra for this project was: hear a character coming, see them enter the space, and feel them sit next to you.”

The research team will present and demonstrate the Magic Bench at SIGGRAPH 2017, the Computer Graphics and Interactive Techniques Conference, beginning July 30 in Los Angeles.

The researchers used a color camera and depth sensor to create a real-time, HD-video-textured 3D reconstruction of the bench, surroundings, and participants. The algorithm reconstructs the scene, aligning the RGB camera information with the depth sensor information.

To eliminate depth shadows that occur in areas where the depth sensor has no corresponding line of sight with the color camera, a modified algorithm creates a 2D backdrop. The 3D and 2D reconstructions are positioned in virtual space and populated with 3D characters and effects in such a way that the resulting real-time rendering is a seamless composite, fully capable of interacting with virtual physics, light, and shadows.