
A Measured Approach to Regulating Fast-Changing Tech – Harvard Business Review

Executive Summary

Innovations driving what many refer to as the Fourth Industrial Revolution are as varied as the enterprises affected. Industries and their supply chains are already being revolutionized by several emerging technologies, including 5G networks, artificial intelligence, and advanced robotics, all of which make possible new products and services that are both better and cheaper than current offerings. Unfortunately, not every application of transformational technology is as obviously beneficial to individuals or society as a whole. But rather than panic, regulators will need to step back and balance costs and benefits rationally.

Amid the economic upheaval caused by Covid-19, technology-driven disruption continues to transform nearly every business at an accelerating pace, from entertainment to shopping to how we work and go to school. Though the crisis may be temporary, many changes in consumer behavior are likely permanent.

Well before the pandemic, however, industries and their supply chains were already being revolutionized by several emerging technologies, including 5G networks, artificial intelligence, and advanced robotics, all of which make possible new products and services that are both better and cheaper than current offerings. That kind of big bang disruption can quickly and repeatedly rewrite the rules of engagement for incumbents and new entrants alike. But is the world changing too fast? And, if so, are governments capable of regulating the pace and trajectory of disruption?

The answers to those questions vary by industry, of course. That's because the innovations driving what many refer to as the Fourth Industrial Revolution are as varied as the enterprises affected. In my recent book, Pivot to the Future, my co-authors and I identified ten transformative technologies with the greatest potential to generate new value for consumers, which is the only measure of progress that really matters. They are: extended reality, cloud computing, 3D printing, advanced human-computer interactions, quantum computing, edge and fog computing, artificial intelligence, the Internet of Things, blockchain, and smart robotics.

Some of these disruptors, such as blockchain, robotics, 3D printing and the Internet of Things, are already in early commercial use. For others, the potential applications may be even more compelling, though the business cases for reaching them are less obvious. Today, for example, only the least risk-averse investors are funding development in virtual reality, edge computing, and new user interface technologies that interpret and respond to brainwaves.

Complicating both investment and adoption of transformative technologies is the fact that the applications with the biggest potential to change the world will almost certainly be built on unanticipated combinations of several novel and mature innovations. Think of the way ride-sharing services require existing GPS services, mobile networks, and devices, or how video conferencing relies on home broadband networks and high-definition displays. Looking at just a few of the most exciting examples of things to come makes clear just how unusual the next generation of disruptive combinations will be, and how widespread their potential impact on business-as-usual.

Unfortunately, not every application of transformational technology is as obviously beneficial to individuals or society as a whole. Every one of the emerging technologies we identified (and plenty of those already in mainstream use) comes with potential negative side effects that may, in some cases, outweigh the benefits. Often, these costs are both hard to predict and difficult to measure.

As disruption accelerates, so too does anxiety about its unintended consequences, feeding what futurist Alvin Toffler first referred to half a century ago as Future Shock. Tech boosters and critics alike are increasingly appealing to governments to intervene, both to promote the most promising innovations and, at the same time, to solve messy social and political conflicts aggravated by the technology revolution.

On the plus side, governments continue to support research and development of emerging technologies, serving as trial users of the most novel applications. The White House, for example, recently committed over $1 billion for continued exploration of leading-edge innovation in artificial intelligence and quantum computing. The Federal Communications Commission has just concluded one of its most successful auctions yet for mobile radio frequencies, clearing bandwidth once considered useless for commercial use but now seen as central to nationwide 5G deployments. Palantir, a data analytics company that works closely with governments to assess terrorism and other complex risks, has just filed for a public offering that values the start-up at over $40 billion.

At the same time, a regulatory backlash against technology continues to gain momentum, with concerns about surveillance, the digital divide, privacy, and disinformation leading lawmakers to consider restricting or even banning some of the most popular applications. And the increasingly strategic importance of continued innovation to global competitiveness and national security has fueled increasingly nasty trade disputes, including some between the U.S., China, and the European Union.

Together with ongoing antitrust inquiries into the competitive behavior of leading technology providers, these negative reactions underscore what author Adam Thierer sees as the growing prevalence of techno-panics: generalized fears about personal autonomy, the fate of democratic government, and perhaps even apocalyptic outcomes from letting some emerging technologies run free.

Disruptive innovation is not a panacea, but nor is it a poison. As technology transforms more industries and becomes the dominant driver of the global economy, it is inevitable both that users will grow more ambivalent, and, as a result, that regulators will become more involved. If, as a popular metaphor of the 1990s had it, the digital economy began as a lawless frontier akin to the American West, it's no surprise that as settlements grow socially complex and economically powerful, the law will continue to play catch up, likely for better and for worse.

But rather than panic, regulators need to step back and balance costs and benefits rationally. That's the only way we'll achieve the exciting promise of today's transformational technologies, but still avoid the dystopias.


How We Got Trump Voters to Change Their Mind – The Atlantic

Sarah Longwell: Why people who hate Trump stick with him

Typically, when volunteers engage in a canvassing campaign, the effort basically amounts to verbal leafleting. They make a one- to two-minute targeted pitch for a candidate or a ballot initiative, and then they leave or hang up the phone.

In a deep canvass, we want to have a real conversation. To get people to open up, we start by asking the basics: How are you doing? How are you holding up in this global pandemic? We respond not with canned answers, but with more questions: Oh, you're watching football? Who is your team? How is your family doing? We're really asking, and we really listen. Eventually, a true back-and-forth begins, one where we exchange stories about our lives and what is at stake for ourselves and for our communities in this election. Usually, by the end, what emerges is some kind of internal conflict: why the person is frustrated, why she can't decide who to vote for, or why she is skeptical of Biden.

Recently, one of our volunteers, Angela, reached a man by phone while he was at work on a construction site (during the pandemic, we've switched from door-knocking to phone-banking). When Angela asked how he was doing, he initially said he was fine, but when Angela shared how much she's been struggling and how worried she's been about the pandemic, the conversation changed. Angela said that her husband's grandmother had died in a nursing home, along with 50 other people, and he opened up about his wife coming down with COVID-19 and about the time that she called him at work to say she was struggling to breathe. This led to a conversation about health care and the need for good leadership. At the beginning of the call, he said he had no plans to vote but was ready to cast a ballot when he hung up, and Angela ended the call feeling a depth of connection.

Research has shown time and again that people vote from an emotional place. It's not so much that facts don't matter. It's that facts and talking points do not change minds. And arguing opinions at the start of a conversation about politics causes the interview subject to keep his defensive, partisan walls up and prevents him from connecting with the canvasser.

We don't try to directly persuade people to change their minds on a candidate or an issue. Rather, we create intimacy, in the faith that people have an ability to reexamine their politics, and their long-term worldview, if given the right context. We've found that when people start to see the dissonance between what they believe and what they actually want, their views change: many of them come around to a more progressive perspective. For example, if a woman says she believes that immigrants are the main problem in our society, but reveals that her top personal concern is health care, then we talk about whether immigrants have anything to do with that worry. When a man says he wants to feel safe, we ask questions about what, in particular, makes him feel unsafe. If he answers COVID-19, then we talk about which candidate might be better suited to handle the pandemic.


Researchers Look To Animals To Give Reinforcement Learning Systems Common Sense – Unite.AI

AI researchers from institutes like Imperial College London, University of Cambridge, and Google DeepMind are looking to animals for inspiration on how to improve the performance of reinforcement learning systems. In a joint paper published in CellPress Reviews, entitled "Artificial Intelligence and the Common Sense of Animals," the researchers argue that animal cognition provides useful benchmarks and methods of evaluation for reinforcement learning agents, and that it can also inform the engineering of tasks and environments.

AI researchers and engineers have long looked to biological neural networks for inspiration when designing algorithms, using principles from behavioral science and neuroscience to inform the structure of algorithms. Yet most of the cues AI researchers take from the neuroscience/behavior science fields are based on humans, with the cognition of young children and infants serving as the focal point. AI researchers have yet to take much inspiration from animal models, but animal cognition is an untapped resource that has the potential to lead to important breakthroughs in the reinforcement learning space.

Deep reinforcement learning systems are trained through a process of trial and error, reinforced with rewards whenever a reinforcement learning agent gets closer to completing a desired objective. This is very similar to teaching an animal to carry out a desired task by using food as a reward. Biologists and animal cognition specialists have carried out many experiments assessing the cognitive abilities of a variety of different animals, including dogs, bears, squirrels, pigs, crows, dolphins, cats, mice, elephants, and octopuses. Many animals exhibit impressive displays of intelligence, and some animals like elephants and dolphins may even have a theory of mind.
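The reward-driven trial-and-error loop described above can be sketched in a few lines of tabular Q-learning. The toy corridor environment, reward scheme, and hyperparameters below are illustrative assumptions, not taken from the paper:

```python
import random

random.seed(0)  # deterministic run for this sketch

# Toy 5-state corridor: the agent starts at state 0; reaching state 4
# yields a reward of 1.0 -- the analogue of food in animal training.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                   # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):                 # episodes of trial and error
    s, done = 0, False
    while not done:
        # explore occasionally; otherwise exploit current estimates
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # temporal-difference update: nudge Q toward reward + discounted future value
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should walk right, toward the reward.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```

Nothing tells the agent where the goal is; like an animal working for food, it discovers the rewarded behavior purely by repeated trial, error, and reinforcement.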

Looking at the body of research done regarding animal cognition might inspire AI researchers to consider problems from different angles. As deep reinforcement learning has become more powerful and sophisticated, AI researchers specializing in the field are seeking out new ways of testing the cognitive capabilities of reinforcement learning agents. In the research paper, the research team makes reference to the types of experiments carried out with primates and birds, mentioning that they aim to design systems capable of accomplishing similar types of tasks, giving an AI a type of common sense. According to the authors of the paper, they "advocate an approach wherein RL agents, perhaps with as-yet-undeveloped architectures, acquire what is needed through extended interaction with rich virtual environments."

As reported by VentureBeat, the AI researchers argue that common sense isn't a trait unique to humans and that it is dependent upon an understanding of basic properties of the physical world, such as how an object occupies a point in space, what constraints there are on that object's movements, and an appreciation for cause and effect. Animals display these traits in laboratory studies. For instance, crows understand that objects are permanent, as they are able to retrieve a seed even when it is hidden from them, covered up by another object.

In order to endow a reinforcement learning system with these properties, the researchers argue that they will need to create tasks that, when paired with the right architecture, will create agents capable of transferring learned principles to other tasks. The researchers argue that training for such a model should involve techniques that require an agent to gain understanding of a concept after being exposed to only a few examples, called few-shot training. This is in contrast to the hundreds or thousands of trials that typically go into the trial-and-error training of an RL agent.
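The few-shot idea contrasted above can be illustrated, in its simplest supervised form, by a nearest-centroid classifier that generalizes from just three labelled examples per concept. The concepts and data points here are invented for illustration:

```python
import statistics

# Two invented concepts, with three labelled examples ("shots") each.
support = {
    "small": [1.0, 1.2, 0.8],
    "large": [9.0, 10.5, 9.8],
}

# Learn one prototype (centroid) per concept from the few examples.
centroids = {label: statistics.mean(xs) for label, xs in support.items()}

def classify(x):
    # Assign a query to the concept with the nearest prototype.
    return min(centroids, key=lambda label: abs(x - centroids[label]))

print(classify(1.1))   # -> small
print(classify(9.5))   # -> large
```

Three examples suffice here because the classifier carries a strong prior (distance to a prototype); the paper's argument is that RL agents need analogous inductive structure before few-shot concept learning becomes possible.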

The research team goes on to explain that while some modern RL agents can learn to solve multiple tasks, some of which require the basic transfer of learned principles, it isn't clear that RL agents could learn a concept as abstract as common sense. If an agent were potentially capable of learning such a concept, researchers would need tests capable of ascertaining whether the agent understood, for example, the concept of a container.

DeepMind in particular is excited to engage with new and different ways of developing and testing reinforcement learning agents. Recently, at the Stanford HAI conference that took place earlier in October, DeepMind's head of neuroscience research, Matthew Botvinick, urged machine learning researchers and engineers to collaborate more with other fields of science. Botvinick highlighted the importance of interdisciplinary work with psychologists and neuroscientists for the AI field in a talk called "Triangulating Intelligence: Melding Neuroscience, Psychology, and AI."


The true dangers of AI are closer than we think – MIT Technology Review

William Isaac is a senior research scientist on the ethics and society team at DeepMind, an AI startup that Google acquired in 2014. He also cochairs the Fairness, Accountability, and Transparency conference, the premier annual gathering of AI experts, social scientists, and lawyers working in this area. I asked him about the current and potential challenges facing AI development, as well as the solutions.

A: I want to shift the question. The threats overlap, whether it's predictive policing and risk assessment in the near term, or more scaled and advanced systems in the longer term. Many of these issues also have a basis in history. So potential risks and ways to approach them are not as abstract as we think.

There are three areas that I want to flag. Probably the most pressing one is this question about value alignment: how do you actually design a system that can understand and implement the various forms of preferences and values of a population? In the past few years we've seen attempts by policymakers, industry, and others to try to embed values into technical systems at scale, in areas like predictive policing, risk assessments, hiring, etc. It's clear that they exhibit some form of bias that reflects society. The ideal system would balance out all the needs of many stakeholders and many people in the population. But how does society reconcile their own history with aspiration? We're still struggling with the answers, and that question is going to get exponentially more complicated. Getting that problem right is not just something for the future, but for the here and now.

The second one would be achieving demonstrable social benefit. Up to this point there are still few pieces of empirical evidence that validate that AI technologies will achieve the broad-based social benefit that we aspire to.

Lastly, I think the biggest one that anyone who works in the space is concerned about is: what are the robust mechanisms of oversight and accountability?

A: Three areas would go a long way. The first is to build a collective muscle for responsible innovation and oversight. Make sure you're thinking about where the forms of misalignment or bias or harm exist. Make sure you develop good processes for how you ensure that all groups are engaged in the process of technological design. Groups that have been historically marginalized are often not the ones that get their needs met. So how we design processes to actually do that is important.

The second one is accelerating the development of the sociotechnical tools to actually do this work. We don't have a whole lot of tools.

The last one is providing more funding and training for researchers and practitioners, particularly researchers and practitioners of color, to conduct this work. Not just in machine learning, but also in STS [science, technology, and society] and the social sciences. We want to not just have a few individuals but a community of researchers to really understand the range of potential harms that AI systems pose, and how to successfully mitigate them.

A: In 2016, I remember, the White House had just come out with a big data report, and there was a strong sense of optimism that we could use data and machine learning to solve some intractable social problems. Simultaneously, there were researchers in the academic community who had been flagging in a very abstract sense: Hey, there are some potential harms that could be done through these systems. But they largely had not interacted at all. They existed in unique silos.

Since then, we've just had a lot more research targeting this intersection between known flaws within machine-learning systems and their application to society. And once people began to see that interplay, they realized: Okay, this is not just a hypothetical risk. It is a real threat. So if you view the field in phases, phase one was very much highlighting and surfacing that these concerns are real. The second phase now is beginning to grapple with broader systemic questions.

A: I am. The past few years have given me a lot of hope. Look at facial recognition as an example. There was the great work by Joy Buolamwini, Timnit Gebru, and Deb Raji in surfacing intersectional disparities in accuracies across facial recognition systems [i.e., showing these systems were far less accurate on Black female faces than white male ones]. There's the advocacy that happened in civil society to mount a rigorous defense of human rights against misapplication of facial recognition. And also the great work that policymakers, regulators, and community groups from the grassroots up were doing to communicate exactly what facial recognition systems were and what potential risks they posed, and to demand clarity on what the benefits to society would be. That's a model of how we could imagine engaging with other advances in AI.

But the challenge with facial recognition is we had to adjudicate these ethical and values questions while we were publicly deploying the technology. In the future, I hope that some of these conversations happen before the potential harms emerge.

A: It could be a great equalizer. Like if you had AI teachers or tutors that could be available to students and communities where access to education and resources is very limited, that'd be very empowering. And that's a nontrivial thing to want from this technology. How do you know it's empowering? How do you know it's socially beneficial?

I went to graduate school in Michigan during the Flint water crisis. When the initial incidents of lead pipes emerged, the records they had for where the piping systems were located were on index cards at the bottom of an administrative building. The lack of access to technologies had put them at a significant disadvantage. It means the people who grew up in those communities, over 50% of whom are African-American, grew up in an environment where they don't get basic services and resources.

So the question is: If done appropriately, could these technologies improve their standard of living? Machine learning was able to identify and predict where the lead pipes were, so it reduced the actual repair costs for the city. But that was a huge undertaking, and it was rare. And as we know, Flint still hasn't gotten all the pipes removed, so there are political and social challenges as well; machine learning will not solve all of them. But the hope is we develop tools that empower these communities and provide meaningful change in their lives. That's what I think about when we talk about what we're building. That's what I want to see.


An Old Dog’s Tale: Wild visions fill the mind at election time – Chinook Observer

And now it's election time, and it's very exciting. Let me tell you about it.

Pictures of candidates are tied to little parachutes falling from the sky. Some guy's going down the street riding an elephant with a great big speaker dangling from its trunk. Three beauty queens are riding donkeys and singing theme songs.

Maybe this is the kind of small-town madness you see in those sci-fi movies that play at 2 o'clock in the morning.

I have to do something. I pick up the phone and dial 9-1-1. Help me, please! I cry (I think I got the receiver wet; I'm a very nervous guy). Politicians have swarmed right outside my window. I'm breaking down, my voice barely above a whisper. I'm all alone. I'm all alone. I'm afraid.

This kind of thing has been going on for three days now. Since then I've seen gangs of candidates with bow ties and big grinning smiles pushing against my bedroom window. They're all hollering at me, but I can only hear a few of them. More, more for us! (Damn Republicans). Less, less for them! (Damn Democrats).

There's a candidate on the roof, peering down over the side. His face is all dirty. Your chimney's as clean as a whistle, he says cheerily. He's holding a couple of dead crows by their feet. I did that. Vote for me.

I'm out of my head. I stumble to the window and open it; a dozen guys fall backward into a big pile. I got a fever, I say. I need popsicles. Who's got popsicles?

Right away they shoot out toward the driveway. A dozen young men in black suits and dark sunglasses run forward with popsicles.

Root beer, I command. Root beer and banana.

One candidate pushes through. Root beer, he says triumphantly.

No banana? I ask.

I grab the root beer popsicle and close the window. The popsicle guy slinks back into the crowd. He trips over a troop of marching Cub Scouts.

I close the curtain and change out of my cowboy pajamas. In the kitchen there's a guy running for governor, a guy you've seen maybe a hundred times on the TV. He's scrubbing out my sink with his hairpiece.

Can you believe it?

I need to get somewhere, anywhere, but they're fighting on my porch; two guys dressed like Lewis and Clark have each other in a headlock. They're like carnie workers: one of them's got a bunch of plastic donkeys, the other's carrying around stuffed elephants. A lady in sequins is out by the fence blowing on a kazoo and smashing cymbals together. Some girl in silver leotards is standing on her hands on the roof of my pickup. She lights a baton on fire and sticks it in her mouth.

Get outta here! I'm getting bold. These guys are running down my property value.

Me! Me! Vote for me! There's a guy jumping up and down on my garbage can. The lid shatters; he's standing waist deep in broken bags of funky cat litter. Some other guy knocks him over and rolls my garbage can into a ditch. He's screaming at him. Why, you independent!

I bolted past them and ran for my car. Out on the road I saw rows of dancing girls with big leg kicks, grizzly bears spinning saucers, clowns with guns that shot confetti. (I swear I saw the ghost of Lawrence Welk on my neighbor's roof, but I'm not sure. I was sick, after all.)

Bands of roving candidates were running up and down the street like trick-or-treaters at Halloween. I rolled my window down for air. A lady candidate dressed up like Uncle Sam was walking on stilts that lit up with flashing stars whenever they touched the ground. Hungry, she muttered. So hungry. She wrapped her arms around her stomach. Must have votes.

She stopped on the shoulder of the road and fell over, flat on her face. It looked like she couldn't move. (She finally got picked up by a cowboy band playing Merle Haggard songs on the back of a flatbed truck.)

It was all very exciting, in a weird sort of way. But what would happen when the election was over? Where would all these candidates go?

I could only imagine the worst: Candidates standing forlornly at freeway off-ramps, looking for a quarter and a crust of bread. Candidate soup kitchens (a couple of votes for a tuna fish sandwich, huh, mister?). Candidates huddled around campfires along the beach, deep in the woods. Candidate hitchhikers. Candidate old folks' homes?

My friend Delores told me that one night she caught a candidate with wild, crazy eyes in her headlights, biting into a seagull. (She has since proposed setting up candidate-feeding stations.)

We need to create a host of imaginary government jobs to get these candidates off the street. We have to be on the lookout for signs of candidate addiction in our children.

It's election time again. Be vigilant.


Mind the Gap – The Indian Express

Updated: October 25, 2020 6:51:24 pm

By Sunny Jose and Bheemeshwar Reddy A

The maiden time-use survey in India, carried out in six states in 1998-99, stated an obvious but ignored fact. We were terribly generous in confining women to primarily managing kitchens and providing unpaid care work. The proportion of men participating in, and the time spent on, unpaid domestic maintenance and care work was quite low. This revealed, inter alia, our deep allegiance to patriarchal norms, besides the burden and un-freedom women had to endure. Though this, in itself, may not be surprising, whether we have preserved or knocked off this pattern will be interesting to see.

The latest time-use survey, carried out in all the states of India in 2019 by the National Statistical Office, comes after a gap of two decades. The 2019 time-use survey confirms the persistence of the gendered time-use patterns of the past even today. It shows that only about 26 per cent of men (six years and above) participate in either domestic maintenance or care work, and a measly four per cent participate in both activities. The corresponding proportions for women are 81 per cent and 28 per cent. A closer look reveals disquieting facts. Only six per cent of men, as against 75 per cent of women, participate in food and meals management and preparation. While 13 per cent of men participate in childcare and instruction, the proportion is paltry in caring for dependent adults at home.

This huge gender gap also emerges in time spent by the participating men and women. Men spend, on average, 97 and 76 minutes, as against 299 and 134 minutes by women, in unpaid domestic and caregiving services for household members, respectively. Thus, over 80 per cent of women in India spend about five hours daily in unpaid domestic services for household members. Conversely, men spend longer time in employment related activities (459 minutes) than women (333 minutes). However, the gender gap in time spent is larger in domestic maintenance than employment related activities. Though time spent by men in domestic maintenance constitutes only 32 per cent of time spent by women, the time spent by women in employment goes up to 73 per cent of time spent by men. Interestingly, there is no perceptible gender gap in time allocation in other activities, including rest, personal care and socialisation. These broad patterns emerge in both rural and urban India.
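The percentage figures above follow directly from the minute averages; a quick arithmetic check (assuming the simple ratios the article implies):

```python
# Average minutes per day from the survey figures quoted above.
men_domestic, women_domestic = 97, 299       # unpaid domestic services
women_employment, men_employment = 333, 459  # employment-related activities

domestic_ratio = men_domestic / women_domestic        # men's time as a share of women's
employment_ratio = women_employment / men_employment  # women's time as a share of men's

print(round(domestic_ratio * 100))    # -> 32 (per cent)
print(round(employment_ratio * 100))  # -> 73 (per cent)
```

The 299-minute figure also matches the article's "about five hours daily" of unpaid domestic work for women.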

Since education is likely to weaken men's attachment to social norms, does education help increase men's participation in unpaid domestic maintenance and care work? Do all social groups exhibit similar patterns? Surprisingly, neither the participation nor the average time spent on unpaid care and domestic activities increases substantially even if men's education goes up. Also, the same pattern appears among men from all social groups in India. These broad patterns prevail in almost all the Indian states. The notable exceptions are Assam, Arunachal Pradesh and Goa, where around 50 per cent of men participate in either domestic maintenance or care work. With less than 20 per cent of men's participation, Haryana, Rajasthan and Himachal Pradesh remain at the other end. However, average time spent by men in unpaid domestic services varies substantially across the states.

Why does a lower proportion of men participate in, and spend less time on, domestic maintenance and care work? An oft-invoked explanation is that social norms continue to condition women and men to have differential time allocation priorities and possibilities. If so, is the low participation of men and the lesser time spent by them on unpaid domestic maintenance and care work essentially due to the time poverty imposed by their long hours of employment? Or, alternatively, is it because of their adherence to social norms?

A scrutiny of time allocation patterns of older men (above 60 years) is relevant here, as time squeeze due to employment is likely to be less intense among them. Only one-third of older men, as against 78 per cent of older women, participate in domestic maintenance work in rural India. They spend, on average, about 112 and 245 minutes, respectively, in these activities. This pattern also appears in urban India. The lack of significant increase in men's participation and time spent on unpaid domestic and care activities, despite the possible contraction in employment-induced time squeeze, points to the likely influence of social norms. What's more, the data reveals that we are schooling the younger generations (6-14 and 15-29 years) to normalise and practise such gendered patterns.

The 2019 time-use survey confirms that we continue to confine women to primarily shouldering a huge, debilitating burden of unpaid household maintenance and care work. Why do we conform to and perform the gendered time-use patterns of the past even now? Why is the educated, tech-savvy younger generation in harmony with the older generation in fostering the regressive gendered norms? It is important that we seriously take note of and deliberate on the peculiar tenacity of the archaic, gendered constructs and divisions even today and their deep but differential impacts on women and men.

Jose is RBI chair professor at Council for Social Development, Hyderabad. Reddy A is assistant professor at Department of Economics and Finance, Birla Institute of Technology & Science Pilani, Hyderabad campus. Views are personal.


Here is the original post:
Mind the Gap - The Indian Express


Breaking News – HBO’s "Crazy, Not Insane," A Provocative Look at the Minds of Serial Killers, Debuts November 18 – The Futon Critic

HBO's "CRAZY, NOT INSANE," A Provocative Look At The Minds Of Serial Killers, Debuts November 18

From Academy Award®-Winning Director Alex Gibney

Psychiatrist Dr. Dorothy Otnow Lewis has dedicated her career to the study of murderers, seeking answers to the question of why we kill. CRAZY, NOT INSANE, directed and produced by Academy Award®-winner Alex Gibney (HBO's "Agents of Chaos" and "The Inventor: Out For Blood in Silicon Valley"), debuts WEDNESDAY, NOVEMBER 18 (9:00-11:00 p.m. ET/PT). This provocative documentary explores, like a scientific detective story, Dr. Lewis's lifelong attempts to look beyond the grisly details of homicides into the hearts and minds of the killers themselves.

The film will be available on HBO and to stream on HBO Max.

An official selection of the 2020 Venice International Film Festival, the documentary profiles Dr. Lewis and her research, includes videotaped death row interviews, and examines the formative experiences and neurological dysfunction of such infamous murderers as Arthur Shawcross and Ted Bundy, challenging the very notion of evil and proposing that murderers are made, not born.

A well-respected psychiatrist and author, Dr. Lewis began her career working with children, including violent juvenile offenders. Her exposure to the testimony of childhood physical and sexual abuse led her to explore the way that trauma in childhood - often coupled with some neurological damage - can sow the seeds of murderous impulses in adults.

Those insights led her to become an expert in dissociative identity disorder (formerly known as multiple personality disorder) as she observed first-hand the way in which the killers she examined would switch between alternate personalities - or "alters" as she calls them - in the course of her examinations. While Dr. Lewis's conclusions were often dismissed by others, including by well-known forensic psychiatrist Park Dietz, her videotapes of her death row interviews show meaningful transformations between "alters" developed in childhood, often as a way to endure, and sometimes avenge, the pain they suffered.

Among Dr. Lewis's most well-known cases is Arthur Shawcross, who was convicted in 1991 for the murder of eleven women. While Lewis's videotaped exchanges with Shawcross show him inhabiting the alters of his vengeful mother and a 13th century cannibal, Shawcross was found sane and guilty in a trial by jury. Lewis was also one of the last people to interview Ted Bundy just before his execution. In an audiotape featured in the film, Bundy was unusually candid with the psychiatrist, revealing new details that upend the conventional wisdom about him. One of Lewis's regrets is that she was never able to examine Bundy's brain for clues to what made one of the world's most infamous serial killers.

CRAZY, NOT INSANE is a stylistic departure for Gibney, who uses an eclectic mix of cinéma vérité, videotapes of psychiatric evaluations, hand-drawn animation and home movies to explore the complexities of the human mind. Lewis's literary voice is read by actress Laura Dern (HBO's "Big Little Lies" and "The Tale"), bringing further insight into Lewis's career and her cases through her writings. From images of Dr. Lewis scribbling on legal pads in her unruly living room to the studio life-drawing classes she takes, Gibney's portrait of Lewis intends to show a woman of limitless curiosity, willing to explore places others are unwilling to go.

In addition to Shawcross and Bundy, other high-profile convicted murderers and death row inmates assessed by Dr. Lewis include Mark David Chapman, David Wilson, Marie Moore and Joseph Paul Franklin. The film also includes a videotape of Dr. Lewis's interview with "traveling executioner" Sam Jones, an electrician who also administered hundreds of death sentences. While claiming he "zapped" convicted killers in the electric chair without regret, Jones displays a collection of disturbing paintings he made after every execution, which reveal his inner turmoil.

The title, CRAZY, NOT INSANE, refers, in a colloquial way, to the conflict that the legal system - framed by demands for justice that can devolve into a desire for revenge - has with the world of medical science in defining grave mental illness. For many years, Dr. Lewis testified in death penalty cases about whether convicted murderers were sane enough to be executed. Her insights and forensic skills helped change the laws and the way that death penalty lawyers approach their clients' cases.

The film also explores the death penalty itself, highlighting the research that indicates that states with the death penalty tend to have higher murder rates than those without, questioning the theory of the death penalty as a deterrent to violence. The film asks an important question: once dangerous killers are locked away and the public is protected, why is society so determined to execute these human beings?

HBO Documentary Films presents a Jigsaw Production CRAZY, NOT INSANE directed by Alex Gibney; produced by Alex Gibney and Ophelia Harutyunyan, Erin Edeiken and Joey Marra; executive produced by Stacey Offman, Richard Perello and Maiken Baird; For HBO: executive producers, Nancy Abraham, Lisa Heller.

CRAZY, NOT INSANE kicks off a collection of five enthralling crime-focused documentary films that premiere on Wednesdays beginning November 18. Each title goes beyond the sensational headlines to explore the human toll on all sides of a crime and delves deep into the internal and external worlds of perpetrators, victims, and survivors. The anthology includes:

THE MYSTERY OF DB COOPER (November 25), directed by John Dower, explores the only unsolved airplane hijacking in U.S. history, which continues to inspire wild speculation about the identity of the hijacker almost fifty years later. This investigative film explores the many different theories surrounding the case, bringing to life the stories of four people believed by their family and friends to be "DB Cooper," the man who hijacked a 727, jumped from the plane over Washington State with a parachute and $200,000, never to be heard from again.

BABY GOD (December 2), directed by Hannah Olson and executive produced by Academy Award nominees Heidi Ewing and Rachel Grady, is a shocking examination of Las Vegas fertility specialist Dr. Quincy Fortier, who assisted hundreds of couples struggling to conceive. Decades later, many children born from his interventions discover, through DNA and genealogical websites, that Dr. Fortier had used his own sperm to impregnate their mothers without their knowledge or consent, provoking a troubling reckoning with the nature of genetic inheritance, the meaning of family and the dilemma of revealing painful secrets. An official selection of SXSW 2020.

ALABAMA SNAKE (December 9), produced and directed by Theo Love, produced by Bryan Storkel and written by Theo Love and Bryan Storkel, explores the story of Glenn Summerford, a Pentecostal minister accused of attempting to murder his wife with a rattlesnake in the sleepy town of Scottsboro, Alabama. The details of the investigation and the trial that followed have haunted his family and community for decades. The documentary features local Appalachian historian and folklorist Dr. Thomas Burton, who has spent his life studying the culture, beliefs, and folklore of Pentecostal snake handlers, painting a Southern Gothic portrait of Glenn Summerford and his tale of demon possession.

THE ART OF POLITICAL MURDER (December 16) is directed by Paul Taylor, produced by Teddy Leifer and Regina K. Scully, executive produced by Academy Award winners George Clooney and Grant Heslov, and based on Francisco Goldman's award-winning book of the same name. The film tells the story of the 1998 murder of Guatemalan human rights activist Bishop Juan Gerardi, which stunned a country ravaged by decades of political violence. The documentary highlights the team of young investigators who take on the case and begin to unearth a web of conspiracy and corruption, entangling the highest levels of government in their pursuit of the truth.

See the original post here:
Breaking News - HBO's "Crazy, Not Insane," A Provocative Look at the Minds of Serial Killers, Debuts November 18 - The Futon Critic


Algorithmic bias – how do we tackle the underlying problem that inhibits the full potential of AI? – Diginomica

(Pixabay)

The topic of bias in AI is one that's had a lot of airtime at diginomica and beyond. Bad practices involving facial recognition tech or automated candidate selection in recruitment are among the best-known examples, while this summer in the UK, pupils waiting for their vital A-Level exam results found themselves on the wrong end of an algorithm that marked them down, ruining university entry chances in many cases.

The recent (excellent) BlackTechFest conference took on some of the questions around bias in AI in a lively panel discussion that inevitably left more questions than answers in its wake, but provided food for thought. Opening the debate, Dr Djamila Amimer, CEO of AI management consultancy Mind Senses Global, began by attempting a definition of algorithmic bias:

Algorithmic bias is defined as systematic and repeatable errors by computer systems that can cause and derive unfair outcomes, such as giving privilege to one group over another... Most often, data is given as the first and primary factor behind algorithmic bias... Is it really only data that is behind algorithmic bias, or do we have other factors that contribute to that algorithmic bias?

The answer to that last rhetorical question is, of course, yes, a point picked up by Shakir Mohamed, Research Scientist with DeepMind, the AI research laboratory founded in 2010 and acquired by Google in 2014:

I really like that definition of systematic and reproducible error. In those two words you can actually unpack the different kinds of components of where bias is coming in. Bias is coming in whenever there is systematic bias and error coming in. So, for example, the first one will be living in a society that has a set of biases already. That bias is going to be reflected in the mindset, in the thinking, in the way that people are approaching their work unless we are very careful. Data itself has a very important role in systematic bias, but bias is coming in many areas. It is in the way we are measuring. It is in what we are even considering worthwhile measuring. Sometimes we don't have a measurement, so we fill in what's missing instead. All of these can be sources of bias.

Then there is a third source of bias, which is in the actual technical computer algorithmic system itself. We make certain choices when we are deciding what variables to use. We are compressing the model, we are making choices as to how we are building the model, using one approach versus another one. They themselves can introduce bias. You have all these different factors, combining with each other and then what you get is effectively an artificial division, a system of difference is created which empowers some and disempowers other people. We do need to be careful. I think the question of bias is a very deep one, very multi-faceted and I think it's important that we remember the multi-faceted nature that it has.

Bias in algorithms mirrors the real world, suggested Katrina Ffrench, CEO of StopWatch, a coalition of legal experts, academics, citizens and civil liberties campaigners who work together to address what they define as excess and disproportionate stop and search and to promote best practice to ensure fair, effective policing for all:

I think we need to kind of zoom back into how these algorithms come about. If there's bias in society already and the status quo is unequal, and then you produce mechanisms or use tools in the same fashion, you're likely to exacerbate it.

Ffrench cited as a case in point the Gangs Matrix, a database set up by the London Metropolitan Police following civil disturbance and rioting that took place back in 2011:

What basically happened was that the police decided that they needed to identify who was at risk of criminality, specifically serious violence, so they put together this database. The main issue that we found with the database is that it was definitely discriminatory. It used a very rudimentary Excel spreadsheet, into which officers would put scores to do with the harm or risk that they calculated individuals to have.

Research by Amnesty International found that 80% of the people listed on the Gangs Matrix were 12-24 years old, 78% were black and 99% were male - and 35% of people logged had never actually committed a violent offence. The police called the database a risk management tool to prevent crime and shared its data with other official agencies. This resulted, according to Ffrench, in people being denied driving licences, university places and employment, and in one instance, a child being taken into care.

The Information Commissioner's Office eventually ruled that the Met Police were in breach of data protection laws, but a lot of damage had been done by that time, said Ffrench:

It just felt wholly disproportionate. What the police were doing was using AI, using policing tech, to justify discriminatory policing, and then most people in civil society, the young people impacted, had no understanding of it. It was incredibly difficult to challenge... You have human rights and those were breached, and that's where I'm really fearful for AI and tech and the lack of transparency and the impact it can have on people's lives. Without information, [people] have no idea what they're subjected to.

So far, so depressingly familiar. But the panel then turned its attention to what might be done to redress the balance. The temptation with examples of algorithmic bias, as in the case of the UK exams scandal, is that when they are exposed, the brakes are slammed on and a policy u-turn takes place, a practice that doesn't tackle the underlying problem. This infuriates Mind Senses Global's Amimer:

I get really frustrated when I hear about an AI tool or an algorithm that has been shelved or has been just ditched because it was biased. I understand that if there is bias, obviously the algorithm shouldn't be in use in the first place. But where I get frustrated is: surely someone, somewhere could have done something about it? Is the answer always to ditch? Don't we have the power to address and to fix the bias, so we have a bias-free algorithm or bias-free AI?

DeepMind's Mohamed was of a similar view that u-turns are not the answer:

The way we're going to address this particular kind of problem is going to need to be at every level. It's going to be at the technical level, at the organizational level, at the regulatory level, at the societal and grassroots level. I really think the first thing we need to do is build a very broad coalition of people, coalitions between people like me who are technical designers and expert people who are on the ground who understand and see the distress [bias can cause].

He pointed to the push back against facial recognition as a case in point:

Over the last five years or so we've seen that kind of coalition from amazing women in their fields, black women who saw this distress, wrote papers to expose the issue and then, five years later, built those coalitions. Every company now has decided we're not going to be involved in facial recognition. Cities and states themselves have decided to ban facial recognition. So the first solution - and maybe the hardest work - is to do that kind of broad coalition.

And it's important to remember that AI and algorithms can be used to good effect, said Naomi Kellman, Senior Manager for Schools and Universities at Rare Recruitment, a specialist diversity recruitment company that aims to help employers build workforces that better reflect the diversity of society:

We have built what we could also call an algorithm in the form of the Contextual Recruitment System. Originally, top employers in certain sectors tended to look for a certain type of grade profile and also certain types of work experience. That appears color-blind, but it's not, because we know some people have more access to good education and good opportunities. What we were able to do is build a system that looks at people's achievement in context.

So it looks at the school you went to and asks, 'What does A,A,B look like in your school? Is that what everyone gets, or is that the best grade anyone's got for the past few years?'. We can highlight to employers when somebody has actually outperformed in a school situation that maybe doesn't tend to produce good grades. We also collect data on people's socio-economic status - if they've been eligible for free school meals, or if they grew up in the care system, or if they came to the country as a refugee - all things that we know have an impact on people's chances of achieving academically, and we can put things in context.

This is encouraging businesses to take a wider perspective on recruiting talent, she said:

The organizations that use it now see that they interview a much broader group of people, because instead of having a very basic algorithm that says 'three A's or you're out', they now use all of this data to say, 'Actually, this person has high potential', because we're looking at more data points, and that means more people get hired from a wider range of backgrounds. Students are coming to see it being used in graduate recruitment and also at university level. Universities now do contextualization and they're looking to expect that from employers. So I think it's about thinking about how we can use data to broaden opportunities for people and to put things into context.
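Kellman's description amounts to a simple statistical idea: judge a grade profile against the distribution of results at the candidate's own school, rather than against a single absolute bar. A minimal sketch of that idea, with illustrative grade weights and cut-offs (these names and thresholds are assumptions for demonstration, not Rare Recruitment's actual system):

```python
# Illustrative sketch of contextual screening: a candidate is shortlisted
# either by clearing an absolute grade bar, or by strongly outperforming
# the historical results at their own school.

GRADE_POINTS = {"A*": 6, "A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def grade_score(grades):
    """Convert a list of letter grades, e.g. ["A", "A", "B"], to a total."""
    return sum(GRADE_POINTS[g] for g in grades)

def school_percentile(candidate_grades, school_history):
    """Percentile rank of the candidate's score among recent leavers'
    results at the same school (school_history: list of grade lists)."""
    score = grade_score(candidate_grades)
    peers = [grade_score(g) for g in school_history]
    below = sum(1 for p in peers if p < score)
    return 100.0 * below / len(peers)

def shortlist(candidate_grades, school_history,
              absolute_cutoff=15, percentile_cutoff=90.0):
    """Shortlist on the absolute bar OR on in-context outperformance."""
    return (grade_score(candidate_grades) >= absolute_cutoff
            or school_percentile(candidate_grades, school_history) >= percentile_cutoff)

# A,A,B (score 14) misses a naive "three A's" bar (score 15), but at a
# school where most leavers score far lower it sits in the top decile.
history = [["C", "C", "D"]] * 45 + [["B", "B", "C"]] * 4 + [["A", "A", "A"]]
print(shortlist(["A", "A", "B"], history))  # → True
```

The design choice mirrors the quote above: the absolute cut-off is kept, so nobody who would have been shortlisted before is excluded, while the percentile branch admits candidates whose grades are exceptional in the context of their school.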

Context is certainly key. What Kellman and her organization are talking about is a very worthy goal, but a long-term one that will require a lot of changed perspectives from employers in the tech space, some of whom still have lamentable track records on the diversity front. As StopWatch's Ffrench noted:

I think it's about diversity and representation. That's about tech companies doing more to recruit and to retain and to promote black professionals...until we're in those spaces, we're gonna find that these things keep replicating themselves.

Read the original here:
Algorithmic bias - how do we tackle the underlying problem that inhibits the full potential of AI? - Diginomica


Family Hardship Helps Inspire Student’s Sense of Wonder and Appreciation for the Mind and Body. – Bethel University News

Ogden is also grateful that Bethel allowed her to merge her love of science and her faith. "I love that I am able to have deep conversations on a regular basis about faith and science with friends that I have made here," she says. Ogden has been involved in Bethel's Science and Religion Club's Christmas and Easter events, and she's able to explore the intersection of science and religion in classes. But she's also able to blend in her love of theology and philosophy. "I love that my interests in theology and science can inform one another and that I am allowed to wrestle, ask hard questions, and evolve my thinking," she says.

Through classes and clubs, Ogden has been able to follow many eclectic interests at Bethel. She loves writing poetry and was published in The Coeval, Bethel's student literary journal, and she says her love of artistic expression helps her as a scientist. "I am also a creative, and I think that inviting my creativity into intellectual and scientific spaces improves my writing and helps me spread a sense of wonder and curiosity about the world," she says. "Along with the creativity, I think that my analytical nature and open-mindedness make me a stronger scientist and truth-seeker." Ogden also loves climbing and has been involved with Bethel's Beta Climbing Club. She joined the Oxford University Mountaineering Club, too, even climbing in Snowdonia National Park in Wales.

Ogden is still early in her educational journey, but she's already received numerous opportunities. She was recently one of two recipients from a pool of 350 to receive Women in Science and Technology Scholarships through Watermark. Along with an affirmation of her journey, Ogden says the scholarship helps ease her financial burden so she can focus on her studies. After graduating from Bethel, Ogden hopes to return to Oxford to pursue her Master of Science in Clinical and Therapeutic, and then she plans to start an MD/Ph.D. program and eventually complete a four-year neurology physician-scientist residency. Though that means she likely won't be a board-certified physician until her 30s, Ogden remains committed to one day serving families like her own. "After honing my research skills through the lengthy art of becoming a physician-scientist, I hope to work on clinical trials for diseases similar to HD," she says. "Hopefully there will be a cure by the time I am in practice."

See the original post:
Family Hardship Helps Inspire Student's Sense of Wonder and Appreciation for the Mind and Body. - Bethel University News


Enlightening New Book ‘The New Prophet’ Provides Deep Meditative Truths to Awaken the Heart – GlobeNewswire

Cover of "The New Prophet" by Kevin MacNevin Clark

MANASSAS, Va., Oct. 26, 2020 (GLOBE NEWSWIRE) -- After studying Kahlil Gibran's The Prophet, author and addiction and trauma specialist Kevin MacNevin Clark was inspired to write a modern version of the book focusing on topics related more to the internal human experience. In his debut book, The New Prophet, Mr. Clark presents deep meditative truths through thoughtful metaphors and imagery around feelings such as guilt, shame, diversity and collective trauma that are relevant to current times, as people become more in touch with their emotions.

The New Prophet follows the great counselor Ishala who, knowing his final days are upon him and wanting to spend them with family, returns to the hometown where he spent many years healing wounds and providing hope to so many. His beloved son Ezekiel sat with him through his last days, and together they shared a sacred conversation in which the son asked his father to bestow upon him his wisdom regarding the human condition. The counselor, a man who lived a life of love and service with each word he spoke and each breath he took, recounted parables and enriched his meaning through metaphor as he left his son this beautiful parting gift.

"I believe there are no hopeless cases, and my book is as much for those hurting as it is for the hopeful," said Mr. Clark. "It will help people find emotional freedom, develop spiritually and heal hurts."

"The New Prophet channels Kahlil Gibran's poetic style to provide a new perspective on commonly held attitudes and beliefs... Whereas Gibran's The Prophet was a poetic celebration of what makes us human, The New Prophet is a poetic road map to remind us how we reclaim our humanity and happiness in a materialistic and individualistic world." - 5-star Amazon review

Mr. Clark presents a much-needed message during these times of uncertainty through his thought-provoking and meditative book, which provides comfort and encouragement to spiritual seekers looking to achieve a state of spiritual alignment, those in recovery, people seeking creative inspiration and more.

The New Prophet
By Kevin MacNevin Clark
ISBN: 978-1-9822-5415-5 (sc); 978-1-9822-5414-8 (hc); 978-1-9822-5419-3 (e)
Available through Amazon, Barnes & Noble, and Balboa Press

About the Author
Kevin MacNevin Clark found deep purpose through his work in the behavioral health field, specializing in treating addiction and trauma. He holds a degree in psychology from George Mason University and has been on his own path of awakening since 2005, getting sober and entering recovery in 2006. He founded Excelsior Addiction Services LLC in 2020 and resides in Virginia with his family, living by his guiding philosophy that there are no hopeless cases. The New Prophet is his first book, and he is currently working on his next one. To learn more, please follow Mr. Clark on Facebook and visit http://www.excelsioraddictionservices.com.

Balboa Press, a division of Hay House, Inc., is a leading provider of publishing products that specialize in the self-help and mind, body, and spirit genres. Through an alliance with worldwide self-publishing leader Author Solutions, LLC, authors benefit from the leadership of Hay House Publishing and the speed-to-market advantages of the self-publishing model. For more information, visit balboapress.com. To start publishing your book with Balboa Press, call 877-407-4847 today.

Continued here:
Enlightening New Book 'The New Prophet' Provides Deep Meditative Truths to Awaken the Heart - GlobeNewswire
