Vivek Oberoi speaks his heart out on lobbying and bullying, 20 years after his infamous press conference: "It's been a hallmark of our industry"...

On April 1, 2003, actor Vivek Oberoi held the infamous press conference at which he uncovered some harsh realities of the Hindi film industry. Now, 20 years later, the conversation around lobbying and bullying has resurfaced, this time courtesy of actor Priyanka Chopra Jonas, who said in a recent interview that she was being "pushed into a corner" in the industry and had "beef" with people. "I'm glad that I emerged from that. I kind of came up through the trial by fire, and survived it. But not everybody's gonna be that lucky," Oberoi confesses.

Recalling his journey as it marks 20 years since the conference, he says, "In hindsight, I went through a lot of stuff that was unnecessary. A lot of lobbies, a lot of repressive stories - kind of what Priyanka has been alluding to also. That's been a hallmark of our industry, unfortunately. It's been one of the dark sides of our industry. And I've been on the receiving end of it. I know how frustrating it is; it can make one feel extremely exhausted and tired. You feel like, I've just delivered an award-winning, commercially successful performance in Shootout at Lokhandwala, and after that, I'm sitting at home for 14 months, not getting any work. When I went through that, I kept thinking, I want to do something beyond, I wanna do something empowering, something that takes me beyond that." Oberoi shifted his focus to philanthropy and business. He adds, "Priyanka's latest statement is so inspirational in terms of finding a new space. She went out and explored something different and got out of a rut, and something magical happened for her personally and professionally."

The Dharavi Bank actor admits that bullying and other disgraceful practices can kill young talent at a budding stage.

"The industry is a very insecure place. Artists by nature live in a very fragile state because they're more vulnerable. Whether it was the MeToo movement, the casting couch, or just bullying and lobbying - all of these things take the joy out of the creativity. I'm glad these things are being spoken about and slowly will go away," he opines.

While the woke culture of today was missing twenty years ago, the 46-year-old confesses that the tsunami of fans' support now scares a lot of people. "Back in the day when I spoke out against it, every well-wisher would call me up and say, don't talk about it, it is like a family secret. But if you have abuse going on in the family and you don't speak up about it because it's a family secret? That's stupid. How will the abuse get fixed? So, it's a systemic issue in the industry which is getting better. People are being more vocal. There is more decentralization of power. Fewer and fewer people can play God, and the fans are now aware. Sushant (Singh Rajput, the late actor) should have never lost his life, no matter what. It's just so sad. What a talented young guy, and he should have had a better network of friends. You call the industry a family, then the family should be there for each other," he signs off.

Visit link:

Vivek Oberoi speaks his heart out on lobbying and bullying, 20 years after his infamous press conference: "It's been a hallmark of our industry"...

Google denies Bard was trained with ChatGPT data – The Verge

Google's Bard hasn't exactly had an impressive debut, and The Information is reporting that the company is so interested in changing the fortunes of its AI chatbots that it's forcing its DeepMind division to help the Google Brain team beat OpenAI with a new initiative called Gemini. The Information's report also contains the potentially staggering thirdhand allegation that Google stooped so low as to train Bard using data from OpenAI's ChatGPT, scraped from a website called ShareGPT. A former Google AI researcher reportedly spoke out against using that data, according to the publication.

But Google is firmly and clearly denying that the data was used: "Bard is not trained on any data from ShareGPT or ChatGPT," spokesperson Chris Pappas tells The Verge.

Pappas declined to answer whether Google had ever used ChatGPT data to train Bard in the past. "Unfortunately, all I can share is our statement from yesterday," he says.

According to The Information's reporting, a Google AI engineer named Jacob Devlin left Google to immediately join its rival OpenAI after attempting to warn Google not to use that ChatGPT data, because it would violate OpenAI's terms of service and because its answers would look too similar. One source told the publication that Google stopped using that data after his warnings. Perhaps it threw out that portion of the training, too.

Update March 30th, 2:02PM ET: Google would not answer a follow-up question about whether it had previously used ChatGPT data for Bard, saying only that Bard isn't trained on data from ChatGPT or ShareGPT.

Follow this link:
Google denies Bard was trained with ChatGPT data - The Verge

The Rise of AI Chatbots in Hearing Health Care : The Hearing Journal – LWW Journals

One of the most exciting recent technological innovations has been the deployment of artificial intelligence (AI) chatbots based on large language models (LLMs). AI chatbots are a type of generative AI that can generate text. Other examples of generative AI create pictures (e.g., DALL-E or Stable Diffusion; see Figures 1 and 2) or music (e.g., Jukebox). In November 2022, OpenAI publicly launched ChatGPT, an AI chatbot that can engage in conversations, generating responses to users' questions (i.e., prompts) that are almost indistinguishable from those of humans. The launch of ChatGPT represents a technological revolution, one that could change the face of health care as we know it, including hearing health care. ChatGPT is not an isolated example but part of a global race to develop the most compelling AI chatbot. Besides OpenAI, which is financed by Microsoft, other large corporations such as Meta, Google, and Tencent have launched similar proprietary products based on LLMs (e.g., LaMDA).

Figure 1. DALL-E-created artwork. Prompt: "The rise of AI chatbots in hearing healthcare, digital art."

Figure 2. DALL-E-created artwork. Prompt: "A futuristic illustration of the Planet of the AI chatbots in hearing health care, a movie about invading an ear that is also a hospital."

AI Chatbots in Hearing Health Care: Applications, Risks, and Research Priorities for Patients, Clinicians, and Researchers.

AI chatbots are computer programs that use natural language processing (NLP) to communicate with humans. They are trained on large collections of language (e.g., all written books and most of the internet) to predict what response is most likely to a wide range of queries. For a human user, it may appear as if the system understands the question and can provide personalized advice, recommendations, and support. In reality, chatbots have no understanding of the world around them nor of the human body and its health status. Still, the potential applications for AI chatbots in health care are broad, with use cases for patients, clinicians, researchers, and training students. 1
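
To make the "prediction, not understanding" point concrete, the sketch below builds a deliberately tiny next-word predictor in Python. It is an illustration only: real LLMs use neural networks trained on vastly larger corpora, and the two-sentence corpus and bigram counting here are invented stand-ins for this example.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "large collections of language"; invented for illustration.
corpus = ("the patient reports ringing in the left ear . "
          "the clinician recommends a hearing test .").split()

# Count which word tends to follow which (a bigram model, a drastic
# simplification of what a large language model learns).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start_word: str, length: int = 8) -> str:
    """Repeatedly emit the most likely next word: pure prediction, no understanding."""
    word, output = start_word, [start_word]
    for _ in range(length):
        if word not in next_word_counts:
            break
        word = next_word_counts[word].most_common(1)[0][0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # emits a fluent-looking chain of words with no grasp of their meaning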

The broad trend for the use of AI chatbots in health care is to increase accessibility (to medical knowledge) and affordability of care. Chatbots can provide 24/7 access to health care advice and support, reducing the need for in-person consultations, and potentially improving patient outcomes. Additionally, AI chatbots could potentially provide valuable insights and data to health care professionals, allowing them to make more informed decisions about patient care. More transparency on the data these chatbots have access to and use to produce their output is important and has been raised as a concern regarding existing systems. 2

In hearing health care, chatbots could be used to support patients, clinicians, and researchers (Table 1).

Patients can benefit from AI chatbots in hearing health care in various ways. One potential application is for initial screening and the recommendation of interventions. For example, a patient could interact with a chatbot that asks about their symptoms and hearing history and provides recommendations for self-management of symptoms, further evaluation, or treatment based on the patient's responses. 3 This could be particularly useful in cases where patients are unsure whether they are experiencing hearing loss, are hesitant to seek medical attention, or where a profound hearing loss inhibits a conversation with a clinician. Chatbots can also serve as educational resources, self-management tools, and screening tools for comorbidities, including social needs. 4 Patients can receive information about hearing health, prevention tips, and advice on how to manage hearing conditions. Chatbots can provide information on the use of management options such as hearing aids, how to change batteries, and troubleshooting common issues. However, a potential risk is that chatbots may not provide accurate recommendations, leading to delayed diagnosis or inappropriate treatment.

Clinicians can benefit from AI chatbots in hearing health care in various ways. Chatbots can assist with data collection and analysis by collecting data on patients' hearing health, such as self-reported symptoms or hearing aid usage. Chatbots can provide summary reports or visualizations to help clinicians make treatment decisions, such as providing a summary report of a patient's hearing test results, highlighting areas of concern, and providing recommendations for further evaluation or treatment. Another potential application is to assist with decision-making and treatment planning. For medical applications, Google and DeepMind developed Med-PaLM, an LLM that incorporates clinical knowledge and has been evaluated using newly developed benchmarks. 5 Chatbots that unlock clinical knowledge could suggest treatment options based on a patient's hearing health history and symptoms and provide information on the benefits and risks of each option. For instance, chatbots could suggest a specific type of treatment based on a patient's hearing test results and preferences. Chatbots can also support clinicians in their communication of information in more accessible and person-centered ways.

A potential risk is that chatbots may not provide the same level of clinical judgment and decision-making as a human health care professional. Additionally, there is a risk that the data collected by chatbots may be inaccurate, incomplete, biased, or dated, which could lead to misdiagnosis or inappropriate treatment.

Researchers can benefit from AI chatbots in hearing health care in various ways. Chatbots can collect large amounts of data from diverse populations, providing researchers with valuable insights into the prevalence and impact of hearing loss. For instance, chatbots can potentially collect data on the prevalence of tinnitus in different countries or regions. Another potential application is to facilitate clinical trials and research studies. Chatbots can screen potential participants for eligibility, collect informed consent, and administer study protocols. 4 For example, chatbots can collect self-reported data on hearing aid usage and satisfaction in large-scale clinical trials.

However, a potential risk is that the data collected by chatbots may be incomplete or biased, particularly if the chatbots are only accessible to certain populations or if the questions asked by the chatbots are not culturally sensitive or appropriate for all participants. 2 Additionally, chatbots may inadvertently exclude certain populations from research studies, such as individuals who do not have access to technology or who are not comfortable using it.

There is an urgent priority to investigate the (clinical) application of AI chatbots in hearing health care. General guidelines for the appropriate use of AI chatbots by researchers are being developed in this rapidly changing landscape. Academic journals have broadly agreed that chatbots may not be coauthors on research papers since they cannot take responsibility for their work. 2,6 In terms of hearing research applications, priority should be given to evaluate the validity and reliability of chatbots in collecting and analyzing hearing health data. Researchers and clinicians need to ensure that chatbots can provide accurate recommendations and treatment options and that the data collected by chatbots are reliable.

Usability is another important research priority to ensure that chatbots are user-friendly and accessible to as many patients as possible, regardless of their age or technological literacy. Cultural sensitivity is also important to ensure that chatbots are culturally sensitive and appropriate for all populations. There are also important ethical considerations for using chatbots in hearing health care, including issues related to informed consent, data privacy, and data security. Researchers will also need to assess long-term outcomes of using chatbots in hearing health care. This includes evaluating the impact of chatbots on patient outcomes such as quality of life, satisfaction, and adherence to treatment. Overall, the research priorities for AI chatbots in hearing research should focus on ensuring that chatbots are accurate, reliable, accessible, and culturally sensitive.

Guidelines for appropriate use of AI chatbots by clinicians or patients are not yet available. As the language models have been trained largely by using text from the internet, they are likely to have the same general opinions, stereotypes, and biases that are present on the internet. For this reason, we see a task for specialists and patient organizations to test what prompts yield the best results and provide the guidelines to avoid misuse or misunderstandings.

The rise of AI chatbots (based on LLMs) represents a significant technological advancement that has the potential to revolutionize hearing health care. AI chatbots have the potential to provide personalized advice and support to patients while also providing valuable insights and data to health care professionals. However, it is important to consider the potential risks and benefits of AI chatbots and to prioritize further research to ensure that these technologies are used ethically, effectively, and safely in hearing health care.

We would like to acknowledge the contribution of ChatGPT, an AI chatbot trained by OpenAI using a large language model (LLM), in providing valuable insights and guidance for this article. We experimented with prompt engineering and had conversations with ChatGPT playing the role of patient and clinician to get a first impression of what AI chatbots, such as ChatGPT, could and could not offer.

More:
The Rise of AI Chatbots in Hearing Health Care : The Hearing Journal - LWW Journals

9 signs you’re a deep thinker whose mind works differently – Hack Spirit

If thinking were an Olympic sport, I'm sure I'd be in with a chance of a medal.

And believe me, I'm certainly not bragging here.

In fact, there have been plenty of occasions when I've wished my deep thinking came with an off switch.

I suspect every deep thinker can relate to the chattering voice that, on occasion, they wish would just shut up.

But the truth is that I also love my ability to think deeply.

Not only do I think it makes me a more interesting person, but it also brings a richness to life that I wouldn't want to be without.

Maybe you can relate?

If you're a deep thinker, there are most likely certain signs that you recognize all too well.

It probably comes as no surprise. When you spend so much time in your own head, you tend to get to grips with what makes you tick.

Deep thinkers are reflective.

They are naturally analytical, and so they spend time considering their strengths and weaknesses too.

When you have a habit of self-reflection, it means you have high intrapersonal intelligence.

You take the time to think about your thought processes and your feelings. And this introspection builds your self-knowledge.

To me, self-awareness is the greatest gift to arise from deep thinking. Because it brings us the potential for change.

Only when we get to know ourselves can we honestly evaluate ourselves, our lives, and the world we live in.

Deep thinkers are tapped into the subtleties of life.

So they can observe the most minute of details.

Basically, they're not only good at reading themselves, but they're also good at reading the room in general.

This can give deep thinkers the gift of social awareness. Because with depth often comes heightened perception.

You may notice that you're a good judge of character and can suss people out quickly. You can probably pick up on someone else's energy or intentions.

What you're actually doing is reading the little signs that maybe other people miss.

Deep thinkers can be detail oriented because they have a habit of studying things closely.

First off, can we please do away with the myth that introverts are shy or even quiet?

Sure, some are. But plenty of others are not.

For years I let people tell me I was an extrovert, just because I am a natural communicator, have lots of opinions, and I'm far from timid.

But they were so wrong.

Because rather than being just a personality type, introversion is so much more.

Introverts' brains are wired differently.

Research has found we process stimuli differently and have longer neural pathways.

So it can be more complicated and take longer for our brains to process interactions.

That's why we need plenty of alone time to recharge, and why we find it stimulating enough simply to be alone in our thoughts.

But what has this all got to do with deep thinkers?

Deep thinkers are often introverts because the very definition of introversion is that your energy tends to be more focused on your own inner world.

So there is often a big cross-over between introversion and deep thinking.

What are introvert tendencies?

We're talking about things like:

There are always two sides to every coin.

As a deep thinker myself, I wholeheartedly believe its strengths outweigh its burdens.

But I won't pretend it doesn't have its downsides at times.

Personally, my habit of deep thinking needs to be reined in sometimes.

Otherwise, I can fall into stress, low-level anxiety and unnecessary worry. My mind quickly spills over into hypervigilance and over-planning.

The reality is that thinking is hard to simply switch off.

So deep thinking can turn into overthinking or even rumination.

I overwhelm and flood my brain by contemplating or trying to preempt things. And like an overheating laptop, that stops my brain from functioning properly.

Meditation, breathwork, yoga, journaling, and exercise have become vital tools in my belt to nip overthinking in the bud and give my active brain a rest.

Introverts and deep thinkers often have a natural tendency to enjoy being alone.

It gives them time to contemplate their feelings and thoughts. After all, deep thinking isn't a group activity.

But when you're a really deep thinker, you might find that too much time alone can be bad for your mental health too.

Because as we've seen, deep thinking can slip into overthinking. And you're more likely to do this when you have a lot of time on your hands.

Deep thinkers are often perfectly happy to do very little.

It's not that they're boring, quite the opposite. Researchers have even found this is a sign of intelligence.

They don't need to be constantly doing something to feel stimulated. Their thoughts provide them with plenty of stimulation.

But just like the yin and yang of life, balancing this with staying active can be important to our well-being.

That way, we dont get too lost in thought.

Moving our bodies, getting lost in activities and the company of other people can pull us back into the present.

Deep thinkers tend not to see the world in black and white. They see all the nuanced shades of grey in between.

You might be naturally good at playing devil's advocate.

You aren't hasty in drawing conclusions.

You prefer to contemplate the deeper implications of something before making a decision.

This is a great skill to have in life. It ultimately promotes open-mindedness.

Not only that but it encourages empathy.

When we make an effort to understand and contemplate where other people are coming from, its easier to connect with them.

The flip side of seeing life from different angles can be indecisiveness.

When you recognize life isn't so simple, you spend a lot of time analyzing and contemplating your options.

In many circumstances, this is wise. As the saying goes, only fools rush in.

So the ability to break things down and contemplate them logically can be handy.

Unfortunately in other situations, thinking ourselves around in circles probably does little good.

For example, research has suggested that for really complicated decisions, going with your gut can be a much better strategy.

That's because intuition is far more logical than we often give it credit for.

It's not emotional or impulsive. It's actually our unconscious that's at work.

In an instant, it has accessed a vast warehouse of information and experiences that are neatly (yet silently) stored away in the back of our brains.

It then presents this to you with a gut feeling about something.

And studies have found it can be a really effective way to make decisions, instead of getting stuck in contemplation and uncertainty.

Curiosity is the great fuel that feeds a deep thinker's mind.

It's a bit like the child who is forever asking, "But why?"

Your thirst for knowledge can feel insatiable. You just love to figure things out.

There is always another layer to unpeel. There is always another mystery in life to uncover.

You most likely find learning fascinating, regardless of the subject.

Because it's the newness of the information or perspective that interests you more than the topic itself.

Everything you learn offers you more ideas and thoughts to contemplate.

Deep thinkers very rarely take things at face value. They have inquisitive natures that can't help but delve further below the surface.

"Why" isn't merely a question you ask, it's a state of mind you adopt.

And that state of mind is one of discovery and curiosity.

At the end of the day, our thoughts power our emotions.

So it's very little surprise that deep thinking often leads to deep feeling too.

Deep thinkers are excellent at uncovering a richness to life. They go deeper in every sense, and that means on an emotional level too.

You're also highly tuned in to other people, and you're extremely conscious of your environment and surroundings.

So in many ways, you have a natural antenna that's going to pick up on a lot.

They say that ignorance is bliss because it shields you from so many things.

But as a deep thinker, you don't (in fact, you cannot) hide. Instead, you face and contemplate all the many facets of life.

Sensitivity is your superpower, and yes, at times your cross to bear as well.

Read the rest here:
9 signs you're a deep thinker whose mind works differently - Hack Spirit

The state of artificial intelligence: Stanford HAI releases its latest AI … – SiliconANGLE News

The Stanford Institute for Human-Centered Artificial Intelligence today released the latest edition of its AI Index Report, which explores the past year's machine learning developments.

Stanford HAI, as the institute is commonly known, launched in early 2019. It researches new AI methods and also studies the technology's impact on society. It releases its AI Index Report annually.

The latest edition of the study that was published today includes more than 350 pages. It covers a long list of topics, including the cost of AI training, efforts to mitigate bias in language models and the technologys impact on public policy. In each area that it surveys, the report points out multiple notable milestones that were reached during the past year.

The most advanced neural networks have become more complicated over the past year. Stanford HAI points to Google LLC's Minerva large language model as one example. The model, which debuted last June, features 540 billion parameters and took nine times more compute capacity to train than OpenAI LP's GPT-3.

The growing hardware requirements of AI software are reflected in the rising cost of machine learning projects. Stanford HAI estimates that PaLM, another Google model released last year, cost $8 million to develop. That's 160 times more than GPT-2, a predecessor to GPT-3 that OpenAI released in 2019.

Though AI models can perform significantly more tasks than a few years ago, they continue to have limitations. Those limitations span several different areas.

In today's report, Stanford HAI highlighted a 2022 research paper that found advanced language models struggle with some reasoning tasks. Tasks that require planning are often particularly challenging for neural networks. Last year, researchers also identified many cases of AI bias in both large language models and neural networks optimized for image generation.

Researchers' efforts to address those issues came to the fore in 2022. In today's report, Stanford HAI highlighted how a new model training technique called instruction tuning has shown promise as a method for mitigating AI bias. Introduced by Google in late 2021, instruction tuning involves rephrasing AI prompts to make them easier for a neural network to understand.
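
As a purely illustrative sketch of the kind of rephrasing described above, the snippet below recasts a bare completion-style prompt as an explicit instruction. The wording and template are invented for this example; they are not the actual formats used by Google.

```python
# A bare prompt leaves the model to guess what kind of continuation is wanted.
plain_prompt = "The movie was slow and predictable. Sentiment:"

# An instruction-style prompt states the task explicitly, which instruction-tuned
# models are trained to follow. (Template invented for illustration.)
def to_instruction(text: str) -> str:
    return (
        "Classify the sentiment of the following movie review as positive or negative.\n"
        f"Review: {text}\n"
        "Answer:"
    )

print(to_instruction("The movie was slow and predictable."))
```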

Last year, researchers not only developed more capable AI models but also found new applications for the technology. Some of those applications led to scientific discoveries.

In October 2022, Google's DeepMind machine learning unit detailed a new AI system called AlphaTensor. DeepMind researchers used the system to develop a more efficient way of carrying out matrix multiplications. A matrix multiplication is a mathematical calculation that machine learning models use extensively in the process of turning data into decisions.
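
AlphaTensor's own discovered algorithms are not reproduced here, but the sketch below illustrates what "a more efficient way of carrying out matrix multiplications" means: the textbook method multiplies two 2x2 matrices with eight scalar multiplications, while Strassen's classic 1969 scheme, the kind of shortcut AlphaTensor searches for and extends, needs only seven.

```python
def naive_2x2(A, B):
    """Textbook 2x2 matrix product: 8 scalar multiplications."""
    return [
        [A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]],
        [A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]],
    ]

def strassen_2x2(A, B):
    """Strassen's 1969 scheme: the same product with only 7 scalar multiplications."""
    m1 = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1])
    m2 = (A[1][0] + A[1][1]) * B[0][0]
    m3 = A[0][0] * (B[0][1] - B[1][1])
    m4 = A[1][1] * (B[1][0] - B[0][0])
    m5 = (A[0][0] + A[0][1]) * B[1][1]
    m6 = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1])
    m7 = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1])
    return [
        [m1 + m4 - m5 + m7, m3 + m5],
        [m2 + m4, m1 - m2 + m3 + m6],
    ]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert naive_2x2(A, B) == strassen_2x2(A, B) == [[19, 22], [43, 50]]
```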

Last year also saw scientists apply AI to support research in a range of other areas, Stanford HAI pointed out. One project demonstrated that AI could be used to discover new antibodies. Another project, also led by Google's DeepMind, led to the development of a neural network that can control the plasma in a nuclear fusion reactor.

Stanford HAI's new report also dedicates multiple chapters to the impact of AI on society. Though large language models have only entered the public consciousness in recent months, AI is already making an impact across several areas.

In 2021, only 2% of federal AI-related bills proposed by U.S. lawmakers were passed into law. Last year, that number jumped to 10%. At the state level, meanwhile, 35% of all AI-related bills passed in 2022.

The impact of machine learning is also being felt in the education sector. According to Stanford HAI's research, 11 countries have officially endorsed and implemented a K-12 AI curriculum as of 2021. Meanwhile, the percentage of new computer science Ph.D. graduates from U.S. universities who specialized in AI nearly doubled between 2010 and 2021, to 19.1%.

More:
The state of artificial intelligence: Stanford HAI releases its latest AI ... - SiliconANGLE News

Write Team: Opening your mind to meditation – Shaw Local News Network

About 18 months ago, I decided I wanted to dive into the world of meditation.

Meditation is a concept I have heard a lot about as I have begun a journey of personal growth and learning how to stay in the present moment, rather than dwelling too far in the past or too far into the future. When I first started meditating, I did buy myself a few treats, like an array of crystals that are good for different energies. I also bought a small meditation cushion, a salt lamp, essential oils, and a mat as well. A dark tapestry graced the walls and a silk scarf dressed a lamp to create soft light. I was ready to meditate!

However, meditation is not as easy as I thought it would be. Without a guide, I was completely lost.

One of my favorite apps on my self-improvement journey is called Calm. This app offers a wide variety of options from sleep stories, background music and different meditations geared toward a variety of topics from dating advice to leadership. This app costs about $14.99/month but there are different deals and coupons offered throughout the year.

Jeff Warren, one of the meditation teachers on the Calm app, hosts the Daily Trip, a daily meditation ranging from 7 to 10 minutes with a specific topic in mind. Each night, I am excited to hear his next meditation. Some of the topics include wanting to be exactly where you currently are, how to combat loneliness, and breathing.

Breathing is such an important concept in meditation. Each meditation typically starts with three slow deep breaths. Breathing, a tool we have at our constant disposal, is something I rarely remember to focus on, but I am definitely improving.

Many famous names grace the Calm App, such as sleep stories by Harry Styles and Camila Cabello. There are also sleep stories about famous painters such as Frida Kahlo. I like listening to the different music options, such as a particularly relaxing version of Circles by Post Malone. Shawn Mendes even has a mix readily available for studying, meditation, relaxation or even reading.

LeBron James also has a section in the Calm App which talks about training your mind, the importance of routine and ritual, time management and personal boundaries. He talks about how when you have a set schedule of what is important to you, then it is easier to say no to possible commitments that may not suit you or interest you.

One more top name to mention from the Calm App is Jay Shetty. He has two books out I have read and thoroughly enjoyed; his first book is called Think Like a Monk and his second is entitled 8 Rules of Love. Jay Shetty has taught me that we all have struggles, worries and concerns. However, we can channel our worries into powerful energy. Rather than focusing on the negative, we have the option to be more positive. Happiness can be a choice.

Meditation has helped me slow down. When I meditate, I like to picture myself in a cool stream, letting the water glide through my fingers. Just as thoughts pass through my mind, I can let the water run through my fingers without clinging to it. 10% Happier, a book by Dan Harris, taught me that there is no cure-all for life. However, we can do things that make us just a little bit happier, like meditate, spend more time outside, be with loved ones, and be grateful for what we do have.

This can be easier said than done, but with daily practice, a calmer sense of mind can emerge.

Brittany Muller is a pre-kindergarten/kindergarten teacher at Lighted Way in La Salle. She lives in Peru and enjoys writing and has worked on small school newspapers for much of her life.

Read more from the original source:
Write Team: Opening your mind to meditation - Shaw Local News Network

Hypnotic Trailer: Ben Affleck Stars in Mind-Bending Action Thriller From Robert Rodriguez – Variety

After a peek at director Robert Rodriguez's action thriller Hypnotic at this year's SXSW, audiences won't have to wait much longer to catch the full version in theaters this spring.

IGN has unveiled the first official trailer for the upcoming psychological thriller, which follows police detective Daniel Rourke (Ben Affleck) as he searches for his missing daughter Minnie (Hala Finley). He soon learns she is associated with a series of ongoing robberies conducted by a mysterious man (William Fichtner) with hypnotic powers.

The trailer gives audiences a look at Affleck's frantic search for his missing daughter. The desperate dad slowly begins to spiral out of control once his investigation pushes him to confront his deepest, darkest fears. With assistance from psychic Diana Cruz (Alice Braga), Daniel sets off to pursue the mysterious man through his string of robberies and get Minnie home safe. As he finds out in the trailer, people with hypnotic powers can force their victims to see and feel things that aren't real.

Affleck, Finley, Fichtner and Braga are joined by Jeff Fahey, Kelly Frye, JD Pardo, Bonnie Discepolo, Dayo Okeniyi, Derek Russo and Corina Calderon.

Following its SXSW premiere, Hypnotic received favorable reviews, with Variety's Peter Debruge writing that "the typical popcorn-munching multiplex patron would never suspect how deep this Russian-doll mystery goes. Better to strap in and go along for the ride in the latest example of creativity-within-constraints."

Watch the Hypnotic trailer below. The film is set to debut in theaters on May 12.

The rest is here:
Hypnotic Trailer: Ben Affleck Stars in Mind-Bending Action Thriller From Robert Rodriguez - Variety

The Future of AI: What Comes Next and What to Expect – The New York Times

In today's A.I. newsletter, the last in our five-part series, I look at where artificial intelligence may be headed in the years to come.

In early March, I visited OpenAI's San Francisco offices for an early look at GPT-4, a new version of the technology that underpins its ChatGPT chatbot. The most eye-popping moment arrived when Greg Brockman, OpenAI's president and co-founder, showed off a feature that is still unavailable to the public: He gave the bot a photograph from the Hubble Space Telescope and asked it to describe the image in painstaking detail.

The description was completely accurate, right down to the strange white line created by a satellite streaking across the heavens. This is one look at the future of chatbots and other A.I. technologies: A new wave of multimodal systems will juggle images, sounds and videos as well as text.

Yesterday, my colleague Kevin Roose told you about what A.I. can do now. Im going to focus on the opportunities and upheavals to come as it gains abilities and skills.

Generative A.I. systems can already answer questions, write poetry, generate computer code and carry on conversations. As "chatbot" suggests, they are first being rolled out in conversational formats like ChatGPT and Bing.

But that's not going to last long. Microsoft and Google have already announced plans to incorporate these A.I. technologies into their products. You'll be able to use them to write a rough draft of an email, automatically summarize a meeting and pull off many other cool tricks.

OpenAI also offers an A.P.I., or application programming interface, that other tech companies can use to plug GPT-4 into their apps and products. And it has created a series of plug-ins from companies like Instacart, Expedia and Wolfram Alpha that expand ChatGPT's abilities.
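
As a rough sketch of what "plugging GPT-4 into an app" via the A.P.I. might look like, the snippet below uses the openai Python package's chat-completion interface roughly as it existed when this newsletter was written. The prompt, the meeting-summary use case and the environment-variable setup are placeholders, not details from the article.

```python
import os
import openai  # pip install openai

# Assumes an API key is available in the environment; illustrative setup only.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You summarize meeting notes as three short bullet points."},
        {"role": "user", "content": "Notes: reviewed Q2 roadmap, agreed to hire two engineers, moved launch to June."},
    ],
)

# The generated summary is returned in the first choice of the response.
print(response["choices"][0]["message"]["content"])
```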

Many experts believe A.I. will make some workers, including doctors, lawyers and computer programmers, more productive than ever. They also believe some workers will be replaced.

"This will affect tasks that are more repetitive, more formulaic, more generic," said Zachary Lipton, a professor at Carnegie Mellon who specializes in artificial intelligence and its impact on society. "This can liberate some people who are not good at repetitive tasks. At the same time, there is a threat to people who specialize in the repetitive part."

Human-performed jobs could disappear from audio-to-text transcription and translation. In the legal field, GPT-4 is already proficient enough to ace the bar exam, and the accounting firm PricewaterhouseCoopers plans to roll out an OpenAI-powered legal chatbot to its staff.

A New Generation of Chatbots

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today's powerhouses into has-beens and creating the industry's next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT's debut, Microsoft, OpenAI's primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot's occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Ernie. The search giant Baidu unveiled China's first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised live demonstration of the bot was revealed to have been recorded.

At the same time, companies like OpenAI, Google and Meta are building systems that let you instantly generate images and videos simply by describing what you want to see.

Other companies are building bots that can actually use websites and software applications as a human does. In the next stage of the technology, A.I. systems could shop online for your Christmas presents, hire people to do small jobs around the house and track your monthly expenses.

All that is a lot to think about. But the biggest issue may be this: Before we have a chance to grasp how these systems will affect the world, they will get even more powerful.

For companies like OpenAI and DeepMind, a lab that's owned by Google's parent company, the plan is to push this technology as far as it will go. They hope to eventually build what researchers call artificial general intelligence, or A.G.I.: a machine that can do anything the human brain can do.

As Sam Altman, OpenAI's chief executive, told me three years ago: "My goal is to build broadly beneficial A.G.I. I also understand this sounds ridiculous." Today, it sounds less ridiculous. But it is still easier said than done.

For an A.I. to become an A.G.I., it will require an understanding of the physical world writ large. And it is not clear whether systems can learn to mimic the length and breadth of human reasoning and common sense using the methods that have produced technologies like GPT-4. New breakthroughs will probably be necessary.

The question is, do we really want artificial intelligence to become that powerful? A very important related question: Is there any way to stop it from happening?

Many A.I. executives believe the technologies they are creating will improve our lives. But some have been warning for decades about a darker scenario, where our creations don't always do what we want them to do, or they follow our instructions in unpredictable ways, with potentially dire consequences.

A.I. experts talk about "alignment," that is, making sure A.I. systems are in line with human values and goals.

Before GPT-4 was released, OpenAI handed it over to an outside group to imagine and test dangerous uses of the chatbot.

The group found that the system was able to hire a human online to defeat a Captcha test. When the human asked if it was a robot, the system, unprompted by the testers, lied and said it was a person with a visual impairment.

Testers also showed that the system could be coaxed into suggesting how to buy illegal firearms online and into describing ways to make dangerous substances from household items. After changes by OpenAI, the system no longer does these things.

But it's impossible to eliminate all potential misuses. As a system like this learns from data, it develops skills that its creators never expected. It is hard to know how things might go wrong after millions of people start using it.

"Every time we make a new A.I. system, we are unable to fully characterize all its capabilities and all of its safety problems, and this problem is getting worse over time rather than better," said Jack Clark, a founder and the head of policy of Anthropic, a San Francisco start-up building this same kind of technology.

And OpenAI and giants like Google are hardly the only ones exploring this technology. The basic methods used to build these systems are widely understood, and other companies, countries, research labs and bad actors may be less careful.

Ultimately, keeping a lid on dangerous A.I. technology will require far-reaching oversight. But experts are not optimistic.

"We need a regulatory system that is international," said Aviv Ovadya, a researcher at the Berkman Klein Center for Internet & Society at Harvard who helped test GPT-4 before its release. "But I do not see our existing government institutions being able to navigate this at the rate that is necessary."

As we told you earlier this week, more than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that A.I. tools present "profound risks to society and humanity."

A.I. developers are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict or reliably control," according to the letter.

Some experts are mostly concerned about near-term dangers, including the spread of disinformation and the risk that people would rely on these systems for inaccurate or harmful medical and emotional advice.

But other critics are part of a vast and influential online community called rationalists or effective altruists, who believe that A.I. could eventually destroy humanity. This mind-set is reflected in the letter.

Please share your thoughts and feedback on our On Tech: A.I. series by taking this brief survey.

We can speculate about where A.I. is going in the distant future but we can also ask the chatbots themselves. For your final assignment, treat ChatGPT, Bing or Bard like an eager young job applicant and ask it where it sees itself in 10 years. As always, share the answers in the comments.

Alignment: Attempts by A.I. researchers and ethicists to ensure that artificial intelligences act in accordance with the values and goals of the people who create them.

Multimodal systems: A.I.s similar to ChatGPT that can also process images, video, audio, and other non-text inputs and outputs.

Artificial general intelligence: An artificial intelligence that matches human intellect and can do anything the human brain can do.

Kevin here. Thank you for spending the past five days with us. It's been a blast seeing your comments and creativity. (I especially enjoyed the commenter who used ChatGPT to write a cover letter for my job.)

The topic of A.I. is so big, and fast-moving, that even five newsletters isn't enough to cover everything. If you want to dive deeper, you can check out my book, "Futureproof," and Cade's book, "Genius Makers," both of which go into greater detail about the topics we've covered this week.

Cade here: My favorite comment came from someone who asked ChatGPT to plan a route through the trails in their state. The bot ended up suggesting a trail that did not exist as a way of hiking between two other trails that do.

This small snafu provides a window into both the power and the limitations of today's chatbots and other A.I. systems. They have learned a great deal from what is posted to the internet and can make use of what they have learned in remarkable ways, but there is always the risk that they will insert information that is plausible but untrue. Go forth! Chat with these bots! But trust your own judgment too!

Please take this brief survey to share your thoughts and feedback on this limited-run newsletter.

See the original post here:
The Future of AI: What Comes Next and What to Expect - The New York Times

BNN SUMMARY OF THE WEEK: Deport or not deport. Blessed … – bnn-news.com

Politicians, lawyers and activists have commenced discussions about ways to punish Russian citizens who live in Latvia with permanent residence permits and who are unable or unwilling to pass the state language exam.

It would seem the possibility of deportation for people whose residence permits run out is inevitable. It could have an impact on the country's society (both the welcomers and the deporters), as well as on Latvia's allies.

If you want a longer holiday but frequent court hearings interfere with your plans, just ask the judge. Follow Aivars Lembergs' example.

In ancient times, before Crimea, NATO was generally referred to as an umbrella under which to seek refuge. But no more: now it is a shield we are holding up ourselves. On the 19th anniversary of Latvia's membership of this alliance, the country received a valuable gift: a new member. Finland will soon enter the alliance.

The Minister of Finance continues patiently convincing residents there is no way to go on without reviewing taxes. Taxes, he says, must go up. He also delicately mentioned Estonia, where the general tax burden as a share of GDP is about three percentage points higher than Latvia's 30.8%. Higher taxes in Europe mean more welfare. For everyone, the minister says.

Donald Trump, who is accused of concealing financial information, will enter the history of US presidents. He will be the first US president, though an ex-president at this point, to stand before a court of law. Even Richard Nixon managed to escape this fate by resigning.

Talk of artificial intelligence is no longer something seen exclusively in science fiction movies. But it is time to wake up: the frightening aspect of this is exactly how rapidly AI technologies are developing.

BNN gives you a summary of the most relevant events of the past week in the following topics: Deportations are nigh; Themis with benefits; Latvia's shield; Almost Metallica; In welfare's name; Nuclear neighbour; Hit the brakes!; Stormy consequences.

On Thursday, the 30th of March, Latvia's Saeima passed in the first reading amendments to the Immigration Law that provide certain minor reliefs for Russian citizens, making it easier for them to update their residence permits. The planned amendments caused sharp discussions among deputies: certain politicians said the proposed changes will basically permit the forced deportation of these people from the country.

69 deputies voted in favour of the amendments, 12 voted against, and eight abstained. The final reading of the amendments to the Immigration Law is planned for next week.

Jānis Dombrava from the National Alliance said the arguments from For Stability party leader Aleksejs Rosļikovs against the proposed amendments to the Immigration Law can be considered pitiful Russian propaganda.

Opinion piece

The so-called Aivars Lembergs criminal case has been under review by Riga Regional Court for more than a year. Among the accused are ex-Mayor of Ventspils Aivars Lembergs, his son Anrijs Lembergs and one-time business partner Ansis Sormulis.

The court hearing of Monday, the 27th of March, was organised, like many previous ones, as a video conference call. However, at the very start the usual trial process was interrupted by Aivars Lembergs. He asked the judge to cancel the court hearing that was unexpectedly scheduled for the 11th of April. The reason: it would interfere with his family's already paid-for Easter holiday trip.

The court, after consulting with the prosecution and defence about the schedule, happily postponed this hearing to the 21st of April, perhaps partially because the prosecutor dared to attack Lembergs, asking why deputies, unlike him, are allowed to travel around during work time. The prosecution made it clear again that the honoured court would not allow any liberties, stressing that everyone involved must know their place.

Wednesday, the 29th of March, marks 19 years since Latvia received the most effective means of protection for the nation and state: Article 5 of the NATO Treaty.

Latvia, lacking its own aircraft carriers and fighter jets, is part of the world's strongest military force. At least the country sees it this way in these troubled times.

There has been talk from certain westerners that Latvia, and the Baltic States in general, only consume NATO security rather than contribute to it. There has also been an increase of scepticism among Latvian residents about the guarantees provided by Article 5: will the big and powerful members of the alliance truly rush to our aid if disaster strikes? We can see the answer to this in Ādaži.

So what was Latvia's main contribution to its partners over these years?

Ticket prices for the XXVII Latvian Song and XVII Dance Celebration will be approximately 40% higher than they were in past years. Various estimates indicate as much. Nevertheless, the Cabinet of Ministers will need to review the price list before it is approved.

The price rise is not equal across events. There are also events whose ticket prices have gone down. Unlike the celebrations of 2013 and 2018, however, the range of tickets has narrowed; specifically, the cheapest tickets have disappeared.

For example, tickets for the final concert in Mežaparks will be offered in five price groups: 20, 40, 60, 80 and 100 euros. Tickets for the concert in 2018 were offered in six price groups: 15, 25, 35, 45, 55 and 65 euros. In 2013 there were seven ticket price groups, the cheapest costing 3 lats, or 4.3 euros.

The volume of collected taxes in Latvia is insufficient to cover all of the outlined budget needs. This is why, when working on the Tax Policy guidelines for 2024-2027, there will be discussions about all 14 active taxes, as Minister of Finance Arvils Ašeradens told journalists on Wednesday, the 29th of March.

The minister explained there will be a review of all existing taxes. Proposals from political and social partners will be considered as well. Ašeradens explained that coalition and social partners put an emphasis on labour taxes. New Unity plans to propose a sustainable healthcare funding model in upcoming discussions.

For now, Ašeradens will not predict how the discussions might end. At the same time, he said discussions about labour taxes will be the most difficult.

The announcement from Russian President Vladimir Putin regarding the deployment of tactical nuclear arms in Belarus is yet another attempt to intimidate the West so that it reduces support for Ukraine, said ex-Director of Latvia's Constitutional Protection Bureau Jānis Kažociņš in an interview with the TV3 programme 900 Seconds.

He explained that the Russian armed forces are having a tough time, because the intended three-day war in Ukraine has dragged on and the successes achieved so far are few and weak. This is why Putin has decided to return to the tactic that would intimidate the West the most and force it to reduce direct support for Ukraine. This method is waving nuclear arms around.

In this case it is clear they [Russians] could have brought nuclear arms to Belarus sooner if they considered it necessary.

Leading figures in the world of modern technology want to stop training of the most powerful artificial intelligence (AI) systems, stating that there is a threat to humanity, writes the BBC.

Twitter and Tesla owner Elon Musk, Apple co-founder Steve Wozniak and DeepMind researchers are among the signatories of the petition calling for a halt to the training of AI systems for at least six months.

OpenAI, which also developed ChatGPT, recently released GPT-4. It has impressed observers with its ability to perform simple tasks, such as answering questions about pictures or objects.

In a letter created by the Future of Life Institute and signed by supporters, it is said that the development of AI should be stopped at the current level for the time being, pointing to the risks that even more advanced systems may pose in the future: "AI systems with human-competitive intelligence can pose profound risks to society and humanity."

Former US President Donald Trump has become the first ex-president to face criminal charges after a Manhattan court formally accused him of concealing financial information, writes Reuters.

At the moment, the exact wording of the indictment is not known, but the CNN television channel reported on Thursday, the 30th of March, that Trump faces more than 30 counts. The former president, on the other hand, has declared that he is completely innocent and is not going to give up the fight for the position of president in next year's elections. He accused the Democratic Party of trying to destroy his chances in the elections:

"This is Political Persecution and Election Interference at the highest level in history."

View original post here:
BNN SUMMARY OF THE WEEK: Deport or not deport. Blessed ... - bnn-news.com

Cyber Command Chief: AI, ML, Cyber Progress Critical to U.S. – MeriTalk

Top Department of Defense (DoD) officials told lawmakers during a House Armed Services subcommittee hearing last week that the U.S. needs to keep improving its capabilities in machine learning (ML), artificial intelligence (AI), and cybersecurity in order to maintain its current strategic advantage over other major nation-states.

Gen. Paul Nakasone, Commander of U.S. Cyber Command, said that the U.S. has made significant strides in staying ahead in the cyberspace competition, but that it needs to continue making progress because adversaries, including China and Russia, continue to develop and execute more advanced cyberattacks.

"The United States must keep improving its capabilities in this area," Nakasone said at a March 30 hearing of the Armed Services Cyber, Information Systems, and Innovation Subcommittee.

Nakasone also warned about the negative effects of pausing further AI development, something that some top private sector tech officials have pushed for in a recent letter that cites their fears that advanced AI may pose a threat to humanity.

Twitter CEO Elon Musk is among those who want the training of AI systems above a certain capacity to be halted for at least six months. Apple Co-Founder Steve Wozniak and some researchers at DeepMind also signed onto the letter created by the Future of Life Institute.

The letter calls on all AI labs "to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

"[AI ML] is resonant today and is something that our adversaries are going to continue to look to exploit," said Nakasone. He added that U.S. military forces now have a tenuous advantage over China in the realm of AI that would fray if private sector AI leaders halted development of their systems.

John Plumb, principal cyber advisor to the Secretary of Defense and assistant secretary of Defense for Space Policy at the DoD, also warned lawmakers that China represents the Pentagon's primary "pacing challenge," and that Russia remains an "acute threat."

"Since 2018, the Department has recognized that it is not enough to maintain a defensive posture while preparing for conflict, but that it must defend forward to meet adversaries and disrupt their efforts in the competition that is the daily struggle," Plumb said.

Today, Plumb explained, the DoD campaigns in and through cyberspace to sow doubt among competitors; conducts intelligence-driven hunt forward operations to generate insights into our competitors' tactics, techniques, and procedures while defending U.S. allies' and partners' computer networks; and disrupts malicious cyber actors through offensive cyber operations.

Plumb also emphasized how the President's Fiscal Year 2024 budget will enhance DoD cybersecurity, increase capacity for cyberspace operations, and advance research and development activities for new cyber capabilities.

"These resources will go directly to supporting our cyber mission forces, protecting the homeland, and addressing the threats posed by our adversaries in cyberspace," Plumb said.

Go here to see the original:
Cyber Command Chief: AI, ML, Cyber Progress Critical to U.S. - MeriTalk
