Ancient Wisdom Part 26: Mind-blowing benefits of deep breathing – Hindustan Times

Note to readers: Ancient Wisdom is a series of guides that shines a light on age-old wisdom that has helped people for generations with time-honoured wellness solutions to everyday fitness problems, persistent health issues and stress management, among others. Through this series, we try to provide contemporary solutions to your health worries with traditional insights.

Some ancient practices, once lost but now rediscovered, have significantly contributed to transforming our modern wellness journey. Deep breathing, dating back thousands of years, was practiced by yogis in India and came to be known as Pranayama. Prana means life force and yama means control. By controlling the breath, one can not only master the mind but also keep several diseases at bay. Deep breathing is more relevant in today's world than ever before, in light of increasing stress, deteriorating air quality, weakened immunity and the growing threat of chronic conditions such as diabetes, asthma, heart disease and high blood pressure, among a host of other health issues.

India is not alone: deep breathing has been linked with health and vitality, and with connecting to the divine, in different countries for centuries. China's Qigong practice involves moving meditation, deep rhythmic breathing, and a calm meditative state of mind. In ancient Greece, deep breathing was known as pneuma, the vital breath. The ancient Egyptians believed deep breathing helped form a connection with the divine.

Deep breathing, also known as diaphragmatic breathing, significantly improves oxygen flow in the body, which can calm the nerves and ease stressful thoughts and anxiety symptoms. Deep breathing also helps promote the release of endorphins, which can naturally elevate mood. The practice is also known to work wonders for your lung health: according to the British Lung Foundation, deep breathing can help remove mucus from the lungs after pneumonia and allows more air to circulate. It can also provide a workout to your heart muscles, which strengthen as a result.

In today's edition of Ancient Wisdom, let's discuss how this age-old practice can transform your overall health.

Deep breathing, often associated with modern wellness and mindfulness practices, has a deep history in ancient cultures. Across civilizations, deep breathing has been recognized for its potential to enhance physical, mental, and spiritual well-being.

"In ancient India deep breathing was known as pranayama. In Sanskrit, 'prana' translates to life force, and 'yama' means control. Ancient Indian yogis believed that deep breathing influences the flow of vital energy throughout the body which improves physical health and mental clarity. Sutra 2.51 of Maharishi Patanjalis Yoga Sutras asserts that the fluctuations of the mind are intimately connected to the breath, and by breathing deeply, we can achieve mastery over the mind," says Dr Hansaji Yogendra, Director of The Yoga Institute.

"In ancient China, the Daoist tradition emphasized the cultivation and balance of Qi, the vital energy that flows through the body. Deep breathing was a fundamental aspect of Daoist practices, with exercises like 'Dao Yin' focusing on the regulation of breath to harmonize and enhance the flow of Qi. While in ancient Greece, deep breathing was known as 'pneuma'. Pneuma was the vital breath or life force and was believed to be the divine breath of the gods. Early Greek philosophers, like Empedocles, associated pneuma with the fundamental elements of air and fire, considering it the animating force that sustained life. In ancient Egypt, the hieroglyphic symbol for breath, 'ankh', represented life and immortality. The Egyptians believed that deep breathing, facilitates a connection with the Divine," adds Dr Hansaji.

"Various indigenous cultures too practiced deep breathing. In Native American ceremonial rituals rhythmic breathing was a means to connect with Nature and the spirit world. Similarly, in the Aboriginal cultures of Australia, the concept of 'Dadirri' uses deep and mindful breathing, as a means of connecting with the land and ancestral wisdom," says the Yoga expert.

"The stream of breath should be always like the stream of relaxed reverse, relax in and relax out. For therapeutic purposes, Kapalbhati and Bhastrika can be sustained at a pace, but otherwise for equalizing our polarities, ha-tha, Chandra-surya, ida-pingala. Slow breathing in, slow breathing out popularly known as Anulom-vilom is of the highest order when it comes to neutralizing diseases, increasing immune response, immunoglobulins, triggering our T cells and B cells and increasing oxygen in our lungs, thus in blood, activating your brain functions and most importantly all the simple breathing exercises also can do the same," says Dr. Mickey Mehta, Global Leading Holistic Coach.

Breathing techniques should not stress you out or create any kind of mental stress; even the brisk pace of Kapalbhati and Bhastrika can be focused on releasing disturbing mental and emotional thoughts, hurt, pain and so on.

"Do not forget the relaxed rhythm of the reverse, that is the way your breath should be. When you do shavasana, youre in and out through the nose consciously while the stomach balloons and retreats as you breathe out is the best way of relaxing as your entire body releases all the stress into gravity," says Dr Mehta.

Acharya Dr. Lokesh Muni, Founder President at Ahimsa Vishwa Bharti, shares many benefits of deep breathing.

Delving into the ancient wisdom of various cultures and studies reveals that conscious, deep breathing is far more than a biological necessity. It is a gateway to physical, mental, and spiritual transformation. It creates harmony in the body and the mind in miraculous ways:

Deep breathing taps into divine consciousness, creating a calming effect on the mind. It helps achieve inner peace by liberating the mind from chaos and fostering mindfulness. Conscious breathwork is believed to cleanse and boost the energy channels, aligning the individual with higher states of consciousness.

Encouraging the acknowledgment and observation of feelings without judgment, deep breathing creates a space for emotional acceptance and understanding. It teaches us to respond thoughtfully rather than react impulsively.

Deep breathing significantly boosts the immune system, relaxes the nervous system, and improves oxygen flow to the lungs, positively impacting the heart. It also helps mitigate conditions like diabetes and high blood pressure, as well as asthma and other respiratory difficulties.

Deep breathing aids in alleviating panic attacks, anxiety, and depression by restoring control over breath, promoting a steady heartbeat, and triggering a relaxation response that decreases the jitters and fosters a happy and calm mind.

A composed and alert mind, facilitated by mindful breathing, creates a physiological state conducive to quick reflexes. It improves cognitive function, enabling quicker decision-making; it also aids in anger management, stress management and concentration.

"Regularly practicing breathing exercises improves lung capacity and overall health. Focusing on the breath is a sort of meditation that helps us calm down. It forces us to focus all of our attention on one thing, serving as a mental break from everyday stress. Breathing helps us to instantly ease anxiety and reduce stress," says Shivani Bajwa, Founder, of YogaSutra Holistic Living, India's Leading Functional Medicine Health Clinic.

While the benefits of deep breathing are numerous, regular practice is essential for noticeable changes in the body. As the mind and body align, a universal language emerges: a language spoken by the body, mind, and spirit. This language whispers the profound truth that the key to a well-lived life lies in the ebb and flow of breath.

Shivani Bajwa shares deep breathing exercises to reduce stress.

Exhalation emphasis: Bring the shoulders away from the ears, relax the muscles in the face, and soften the eyelids. You can close your eyes if it's comfortable for you; if not, you can leave them open with the eyelids relaxed. Take a slow, deep inhale through the nose without forcing anything, and exhale slowly with control through the nose. We're trying to lengthen each exhale, making the exhales longer than the inhales.

LHC physicists can’t save them all – Symmetry magazine

In 2010, Mike Williams traveled from London to Amsterdam for a physics workshop. Everyone there was abuzz with the possibilities, and possible drawbacks, of machine learning, which Williams had recently proposed incorporating into the LHCb experiment. Williams, now a professor of physics and leader of an experimental group at the Massachusetts Institute of Technology, left the workshop motivated to make it work.

LHCb is one of the four main experiments at the Large Hadron Collider at CERN. Every second, inside the detectors for each of those experiments, proton beams cross 40 million times, generating hundreds of millions of proton collisions, each of which produces an array of particles flying off in different directions. Williams wanted to use machine learning to improve LHCb's trigger system, a set of decision-making algorithms programmed to recognize and save only collisions that display interesting signals, and discard the rest.

Of the 40 million crossings, or events, that happen each second in the ATLAS and CMS detectors, the two largest particle detectors at the LHC, data from only a few thousand are saved, says Tae Min Hong, an associate professor of physics and astronomy at the University of Pittsburgh and a member of the ATLAS collaboration. "Our job in the trigger system is to never throw away anything that could be important," he says.

So why not just save everything? The problem is that it's much more data than physicists could ever, or would ever want to, store.

Williams' work after the conference in Amsterdam changed the way the LHCb detector collected data, a shift that has occurred in all the experiments at the LHC. Scientists at the LHC will need to continue this evolution as the particle accelerator is upgraded to collect more data than even the improved trigger systems can possibly handle. When the LHC moves into its new high-luminosity phase, it will reach up to 8 billion collisions per second.

As the environment gets more difficult to deal with, having more powerful trigger algorithms will help us make sure we find things we really want to see, says Michael Kagan, lead staff scientist at the US Department of Energys SLAC National Accelerator Laboratory, and maybe help us look for things we didnt even know we were looking for.

Hong says that, at its simplest, a trigger works like a motion-sensitive light: It stays off until activated by a preprogrammed signal. For a light, that signal could be a person moving through a room or an animal approaching a garden. For triggers, the signal is often an energy threshold or a specific particle or set of particles. If a collision, also called an event, contains that signal, the trigger is activated to save it.
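In software terms, that preprogrammed signal is often just a comparison against a threshold. A minimal sketch in Python, assuming a toy event format and an arbitrary 100 GeV cut (neither is taken from any real experiment):

```python
# Minimal sketch of a threshold-style trigger decision.
# The event format and the 100 GeV cut are illustrative assumptions.

def level1_trigger(event, energy_threshold_gev=100.0):
    """Keep an event only if its total deposited energy crosses the threshold."""
    total_energy = sum(hit["energy_gev"] for hit in event["hits"])
    return total_energy >= energy_threshold_gev

# One event fires the trigger and is saved; the other is discarded.
events = [
    {"hits": [{"energy_gev": 60.0}, {"energy_gev": 55.0}]},  # 115 GeV: saved
    {"hits": [{"energy_gev": 20.0}, {"energy_gev": 15.0}]},  # 35 GeV: discarded
]
saved = [e for e in events if level1_trigger(e)]
print(f"Saved {len(saved)} of {len(events)} events")
```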

In 2010, Williams wanted to add machine learning to the LHCb trigger in the hopes of expanding the detector's definitions of interesting particle events. But machine-learning algorithms can be unpredictable. They are trained on limited datasets and don't have a human's ability to extrapolate beyond them. As a result, when faced with new information, they make unpredictable decisions.

That unpredictability made many trigger experts wary, Williams says. "We don't want the algorithm to say, 'That looks like [an undiscovered particle like] a dark photon, but its lifetime is too long, so I'm going to ignore it,'" Williams says. "That would be a disaster."

Still, Williams was convinced it could work. On the hour-long plane ride home from that conference in Amsterdam, he wrote out a way to give an algorithm set rules to follow: for example, that a long lifetime is always interesting. Without that particular fix, an algorithm might only follow that rule up to the longest lifetime it had previously seen. But with this tweak, it would know to keep any longer-lived particle, even if its lifetime exceeded any of those in its training set.
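One way to picture that fix is as a hard physics rule layered over a learned classifier: the model's score decides ordinary events, but anything with a lifetime beyond the training range is kept unconditionally. A hedged sketch (the score cut and lifetime values are invented for illustration; this is the general pattern, not LHCb's actual code):

```python
# Sketch: a hard rule guarding a learned trigger decision.
# The 0.5 score cut and 1.5 ns training maximum are illustrative assumptions.

def guarded_trigger(ml_score, lifetime_ns, train_max_lifetime_ns=1.5,
                    score_threshold=0.5):
    # Rule: a lifetime longer than anything seen in training is always
    # interesting, so keep the event regardless of the model's opinion.
    if lifetime_ns > train_max_lifetime_ns:
        return True
    # Otherwise defer to the classifier's score.
    return ml_score >= score_threshold

print(guarded_trigger(ml_score=0.1, lifetime_ns=3.0))  # True: rule fires
print(guarded_trigger(ml_score=0.1, lifetime_ns=0.4))  # False: score too low
```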

Williams spent the next few months developing software that could implement his algorithm. When he flew to the United States for Christmas, he used the software to train his new algorithm on simulated LHC data. It was a success. "It was an absolute work of art," says Vava Gligorov, a research scientist at the National Centre for Scientific Research in France, who worked on the system with Williams.

Updated versions of the algorithm have been running LHCb's main trigger ever since.

Physicists use trigger systems to store data from the types of particle collisions that they know are likely to be interesting. For example, scientists store collisions that produce two Higgs bosons at the same time, called di-Higgs events. Studying such events could enable physicists to map out the potential energy of the associated Higgs field, which could provide hints about the eventual fate of our universe.

Higgses are most often signaled by the appearance of two b quarks. If a proton collision produces a di-Higgs, four b quarks should appear in the detector. A trigger algorithm, then, could be programmed to capture data only if it finds four b quarks at once.
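As a selection rule, that condition is a one-line count; the difficulty, as the next paragraph explains, lies in actually seeing all four quarks. A sketch, assuming a toy event format with a hypothetical b_tagged flag:

```python
# Sketch: a counting-style trigger condition for di-Higgs candidates.
# The event format and the b_tagged flag are illustrative assumptions.

def di_higgs_candidate(event):
    """Keep events with at least four jets tagged as coming from b quarks."""
    b_tagged_jets = [jet for jet in event["jets"] if jet["b_tagged"]]
    return len(b_tagged_jets) >= 4

event = {"jets": [{"b_tagged": True}] * 4 + [{"b_tagged": False}]}
print(di_higgs_candidate(event))  # True: four b-tagged jets found
```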

But spotting those four quarks is not as simple as it sounds. The two Higgs bosons are interacting as they move through space, like two water balloons thrown at one another through the air. Just as the droplets of water from colliding balloons continue to move after the balloons have popped, the b quarks continue to move as the particles decay.

If a trigger can see only one spatial area of the event, it may pick up only one or two of the four quarks, letting a di-Higgs go unrecorded. But if the trigger could see more than that, looking at all of them at the same time, "that could be huge," says David Miller, an associate professor of physics at the University of Chicago and a member of the ATLAS experiment.

In 2013, Miller started developing a system that would allow triggers to do just that: analyze an entire image at once. He and his colleagues called it the global feature extractor, or gFEX. After nearly a decade of development, gFEX started being integrated into ATLAS this year.

Trigger systems have traditionally had two levels. The first, or level-1, trigger might contain hundreds or even thousands of signal instructions, winnowing the saved data down to less than 1%. The second, high-level trigger contains more complex instructions, and saves only about 1% of what survived the level-1. Those events that make it through both levels are recorded for physicists to analyze.
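The compounding of those two stages is easy to make concrete with round numbers consistent with the rates quoted earlier (1% at each level is an illustrative figure, not an exact experimental setting):

```python
# Back-of-the-envelope trigger cascade with round, illustrative numbers.
events_per_second = 40_000_000  # beam crossings per second
level1_keep = 0.01              # level-1 keeps roughly 1% (illustrative)
hlt_keep = 0.01                 # high-level trigger keeps ~1% of survivors

after_level1 = events_per_second * level1_keep  # 400,000 events/s
after_hlt = after_level1 * hlt_keep             # 4,000 events/s
print(f"Recorded: ~{after_hlt:,.0f} of {events_per_second:,} events per second")
```

That lands on a few thousand recorded events per second, matching Hong's description above.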

For now, at the LHC, machine learning is mostly being used in the high-level triggers. Such triggers could over time get better at identifying common processes: background events they can ignore in favor of a signal. They could also better identify specific combinations of particles, such as two electrons whose tracks are diverging at a certain angle.

"You can feed the machine learning the energies of things and the angles of things and then say, 'Hey, can you do a better job distinguishing things we don't want from the things we want?'" Hong says.

Future trigger systems could use machine learning to more precisely identify particles, says Jennifer Ngadiuba, an associate scientist at Fermi National Accelerator Laboratory and a member of the CMS experiment.

Current triggers are programmed to look for individual features of a particle, such as its energy. A more intelligent algorithm could learn all the features of a particle and assign a score to each particle decay: for example, a di-Higgs decaying to four b quarks. A trigger could then simply be programmed to look for that score.

"You can imagine having one machine-learning model that does only that," Ngadiuba says. "You can maximize the acceptance of the signal and reduce a lot of the background."

Most high-level triggers run on computer processors called central processing units or graphics processing units. CPUs and GPUs can handle complex instructions, but for most experiments, they are not efficient enough to quickly make the millions of decisions needed in a high-level trigger.

At the ATLAS and CMS experiments, scientists use different computer chips called field-programmable gate arrays, or FPGAs. These chips are hard-wired with custom instructions and can make decisions much faster than a more complex processor. The trade-off, though, is that FPGAs have a limited amount of space, and some physicists are unsure whether they can handle more complex machine-learning algorithms. The concern is that the limits of the chips would mean reducing the number of instructions they can provide to a trigger system, potentially leaving interesting physics data unrecorded.

"It's a new field of exploration to try to put these algorithms on these nastier architectures, where you have to really think about how much space your algorithm is using," says Melissa Quinnan, a postdoctoral researcher at the University of California, San Diego and a member of the CMS experiment. "You have to reprogram it every time you want it to do a different calculation."

Many physicists don't have the skillset needed to program FPGAs. Usually, after a physicist writes code in a computer language like Python, an electrical engineer needs to convert the code to a hardware description language, which directs switch-flipping on an FPGA. "It's time-consuming and expensive," Quinnan says. Abstract hardware languages, such as High-Level Synthesis, or HLS, can facilitate this process, but many physicists don't know how to use them.

So in 2017, Javier Duarte, now an assistant professor of physics at UCSD and a member of the CMS collaboration, began collaborating with other researchers on a tool that directly translates computer language to FPGA code using HLS. The team first posted the tool, called hls4ml, to the software platform GitHub on October 25 that year. Hong is developing a similar platform for the ATLAS experiment. "Our goal was really lowering the barrier to entry for a lot of physicists or machine-learning people who aren't FPGA experts or electronics experts," Duarte says.
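hls4ml is open source, and its basic flow is to take a trained neural network and generate an HLS project for an FPGA from it. A minimal sketch of that flow, assuming a tiny placeholder Keras model and an example Xilinx part number (API details may differ between hls4ml versions, so treat this as a rough outline rather than a recipe):

```python
# Sketch of the hls4ml flow: trained Keras model -> HLS project for an FPGA.
# The tiny model, part number and output_dir are placeholder assumptions;
# consult the hls4ml documentation for your version's exact API.
import hls4ml
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    keras.layers.Dense(1, activation="sigmoid"),  # trigger-style score output
])

config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls4ml_trigger_prj",
    part="xcu250-figd2104-2L-e",  # example FPGA part number
)
hls_model.compile()  # builds a C simulation of the generated firmware logic
```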

Quinnan, who works in Duarte's lab, is using the tool to add to CMS a type of trigger that, rather than searching for known signals of interest, tries to identify any events that seem unusual, an approach known as anomaly detection.

"Instead of trying to come up with a new theory and looking for it and not finding it, what if we just cast out a general net and see if we find anything we don't expect?" Quinnan says. "We can try to figure out what theories could describe what we observe, rather than trying to observe the theories."

The trigger uses a type of machine learning called an auto-encoder. Instead of examining an entire event, an auto-encoder compresses it into a smaller version and, over time, becomes more skilled at compressing typical events. If the auto-encoder comes across an event it has difficulty compressing, it will save it, hinting to physicists that there may be something unique in the data.
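The mechanics can be illustrated with a toy reconstruction-error cut: events the compressor handles badly are flagged as anomalous. The sketch below stands in a simple linear compressor for the trained neural auto-encoder, and the 99.9th-percentile threshold is an arbitrary illustrative choice:

```python
# Toy anomaly-detection trigger: keep events the compressor reconstructs badly.
# A PCA-style linear "auto-encoder" stands in for a trained neural network.
import numpy as np

rng = np.random.default_rng(0)
typical = rng.normal(0.0, 1.0, size=(5000, 8))  # ordinary background events

# Fit a 2-component linear compressor on typical events.
mean = typical.mean(axis=0)
_, _, vt = np.linalg.svd(typical - mean, full_matrices=False)
basis = vt[:2]  # the compressed, 2-number representation space

def reconstruction_error(event):
    centered = event - mean
    compressed = basis @ centered    # encode: 8 numbers -> 2
    restored = basis.T @ compressed  # decode: 2 numbers -> 8
    return float(np.sum((centered - restored) ** 2))

# Keep only the hardest-to-compress ~0.1% of events.
threshold = np.percentile([reconstruction_error(e) for e in typical], 99.9)

weird_event = rng.normal(4.0, 1.0, size=8)  # an out-of-pattern event
print(reconstruction_error(weird_event) > threshold)  # True: saved for study
```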

The algorithm may be deployed on CMS as early as 2024, Quinnan says, which would make it the experiment's first machine learning-based anomaly-detection trigger.

A test run of the system on simulated data identified a potentially novel event that wouldn't have been detected otherwise due to its low energy levels, Duarte says. Some theoretical models of new physics predict such low-energy particle sprays.

It's possible that the trigger is just picking up on noise in the data, Duarte says. But it's also possible the system is identifying hints of physics beyond what most triggers have been programmed to look for. "Our fear is that we're missing out on new physics because we designed the triggers with certain ideas in mind," Duarte says. "Maybe that bias has made us miss some new physics."

Physicists are thinking about what their detectors will need after the LHC's next upgrade in 2028. As the beam gets more powerful, the centers of the ATLAS and CMS detectors, right where collisions happen, will generate too much data to ever beam it onto powerful GPUs or CPUs for analyzing. Level-1 triggers, then, will largely still need to function on more efficient FPGAs, and they need to probe how particles move at the chaotic heart of the detector.

To better reconstruct these particle tracks, physicist Mia Liu is developing neural networks that can analyze the relationships between points in an image of an event, similar to mapping relationships between people in a social network. She plans to implement this system in CMS in 2028. "That impacts our physics program at large," says Liu, an assistant professor at Purdue University. "Now we have tracks in the hardware trigger level, and you can do a lot of online reconstruction of the particles."

Even the most advanced trigger systems, though, are still not physicists. And without an understanding of physics, the algorithms can make decisions that conflict with reality: say, saving an event in which a particle seems to move faster than light.

"The real worry is it's getting it right for reasons you don't know," Miller says. "Then when it starts getting it wrong, you don't have the rationale."

To address this, Miller took inspiration from a groundbreaking algorithm that predicts how proteins fold. The system, developed by Google's DeepMind, has a built-in understanding of symmetry that prevents it from predicting shapes that aren't possible in nature.

Miller is trying to create trigger algorithms that have a similar understanding of physics, which he calls self-driving triggers. A person should, ideally, be able to understand why a self-driving car decided to turn left at a stop sign. Similarly, Miller says, a self-driving trigger should make physics-based decisions that are understandable to a physicist.

"What if these algorithms could tell you what about the data made them think it was worth saving?" Miller says. "The hope is it's not only more efficient but also more trustworthy."

AI is here. Ypsilanti schools weigh integrity, ethics of new technology – MLive.com

YPSILANTI, MI -- As the use of artificial intelligence becomes more and more common, Ypsilanti Community Schools is working to keep up with the technology.

With so much still unclear about the full capabilities of AI, Superintendent Alena Zachery-Ross said she believes it's critical for schools to balance the usefulness of the new technology with maintaining academic integrity.

"We've really taken the stance that artificial intelligence is here, and so we need to teach integrity and the ethical considerations that teachers must think about," Zachery-Ross said. "We understand that it's going to be artificial intelligence and human intelligence interacting together from here on out."

YCS has been slowly rolling out the implementation of AI-powered tools since last summer. One way Zachery-Ross sees AI being used is to assist students in developing writing skills.

By using chatbots like ChatGPT -- an artificial intelligence developed by OpenAI that serves as a language model generating human-like text in a conversational style -- YCS can develop prompts and help students brainstorm ideas for writing exercises, Zachery-Ross said.

One way teachers can stem potential misuse of AI is requiring students to complete written assignments in the classroom -- either by writing on paper or typing in a monitored Google document -- so potential cheating would be easier to catch, Zachery-Ross said.

"(Students can) use it for analysis, synthesis and improving their work -- not to generate the work for them," Zachery-Ross said.

In addition to potentially offering new opportunities to personalize student learning, AI could ease some classroom management burdens, such as large-scale data analysis and quickly organizing lesson plans, Zachery-Ross said.

YCS' English Learner Department has been at the frontline of AI implementation in the district. The technology can be used to quickly generate instructional materials in several different languages, said teacher Connor Laporte.

"We primarily use AI tools to create materials for students," Laporte said. "We've done a little bit of having students use it as well, but we're trying to be a little bit slower in talking about how we are rolling that out. You have to be pretty discerning to use (AI)."

Serving the roughly 30% of YCS students who can speak a language other than English, the English Learner Department has found multiple ways to bring AI into the classroom, including helping teachers develop multilingual explanations of core concepts discussed in the curriculum -- and save time doing it.

"A lot of that time saving allows us to focus more on giving that important feedback that allows students to grow and be aware of their progress and their learning," Laporte said.

Laporte cites the example of a Spanish-speaking intern who improved a vocabulary test by double-checking the translations and using ChatGPT to add more vocabulary words and exercises. Another intern then used ChatGPT to make a French version of the same worksheet.

While convenient, artificial intelligence is not infallible, and native-speaking staff members are careful to double-check the work produced through AI tools, Laporte said.

The future is now

AI engines like Google Bard can be used to create bespoke materials for individual students, effectively tailoring classwork to their language proficiency.

AI-generated voice programs also give more options for students to hear multiple dialects of a chosen language. Students will get a chance to differentiate Tanzanian and Ugandan Swahili -- something the monotone, robot-like voice of Google Translate doesn't offer, Laporte said.

"We are planning on rolling it out a little more widely," Laporte said. "We're still cautious -- last year I feel like everyone was terrified of AI, so we don't want to just jump right into it."

Since the beginning of 2023, fifth-grade teacher Melanie Eccles has been implementing the Roadmaps digital education platform to digitally organize her lesson plans.

Developed by the University of Michigan College of Engineering's Center for Digital Curricula, Roadmaps allows Eccles to monitor students' completed work in the same program. The platform uses AI technology to automate the process of sharing information among students and other teachers.

"(Roadmaps) has helped me both incorporate digital learning into the students' curriculum and train them on how to use the curriculum in a way that isn't just browsing the internet," Eccles said.

Sydney Fortson, an 11-year-old student in Eccles' social studies class, likes that the collaboration-based Roadmaps allows her to edit her own work and not just rely on a teacher.

"I like how everything is in one place (with Roadmaps)," Sydney said. "I wish there were a few less tabs, but I like how it gives me choices on how I can learn."

Balance is critical

Students' use of AI in their education is not a question of if, but when, Zachery-Ross said. Because of this, YCS is changing how teachers approach crafting their assignments in the first place.

"Teachers are asking students to do more rigorous tasks -- things that do require more critical thinking and analysis," Zachery-Ross said. "When we get to that level, that's something that a bot can't contribute to."

YCS staff are preparing for a future in which methods like group projects, hands-on assignments and asking students to explain concepts verbally are the norm in lieu of relying on written assignments to showcase student aptitude.

"(Students) are having formative instruction where they're growing and not just getting a final paper or final, simple assignment that can be put into an AI bot," Zachery-Ross said. "We have to move away from that, because that's not higher-level thinking anyway. We really want to get students to analyze and be critical workers."

Though her district is open to the AI-powered future, Zachery-Ross said it will be important to stay careful and cautious when dealing with the technology, and for school districts to learn and grow from each other in order to balance utility with integrity.

"Students need to understand that there have to be ethical considerations," Zachery-Ross said. "That balance is critical for any district or educator thinking about adopting generative AI into their work."

Icebergs are melting fast. This AI can track them 10000 times faster … – Space.com

Scientists are turning to artificial intelligence to quickly spot giant icebergs in satellite images with the goal of monitoring their shrinkage over time. And unlike the conventional iceberg-tracking approach, which takes a human a few minutes to outline just one of these structures in an image, AI accomplished the same task in less than 0.01 seconds. That's 10,000 times faster.

"It is crucial to locate icebergs and monitor their extent, to quantify how much meltwater they release into the ocean," Anne Braakmann-Folgmann, lead author of a study on the results and a scientist at the University of Leeds in the U.K., said in a statement.

In late October, the British Antarctic Survey reported that massive ice sheets covering Antarctica will melt at an accelerated rate for the rest of the century, contributing inevitably to sea level rise around the globe in the coming decades. Last year, one of the biggest icebergs known to scientists, A68a, more than 100 miles long and 30 miles wide, thawed in the South Atlantic Ocean after drifting for five years from its home in the Antarctic Peninsula, where it had broken off in 2017.

Along with dumping 1 trillion tons of fresh water into the ocean, the melting iceberg also pumped nutrients into its environment, which will radically alter the local ecosystem for years to come, scientists have said. It's still unclear whether this change will have a positive or negative effect on the marine food chain.

Scientists monitored A68a's travels and shrinkage using images from satellites. Accurately identifying the iceberg, crucial to monitor changes to its size and shape over the years, is not an easy task, as the icebergs, sea ice and clouds are all white. Plus, although analyzing one satellite image for icebergs takes only a few minutes to complete, the time quickly adds up when thousands of images are waiting for their turn.

"In addition, the Antarctic coastline may resemble icebergs in the satellite images, so standard segmentation algorithms often select the coast too instead of just the actual iceberg," said Braakmann-Folgmann.

So to reduce this time-consuming and laborious process, researchers have, for the first time, trained a neural network to do the job.

The study team trained the AI to spot large icebergs by using images from the European Space Agency's Sentinel-1 satellite, whose radar eyes can capture Earth's surface regardless of cloud cover or lack of light.

Except for missing a few parts of icebergs bigger than the examples the AI was trained on, a solvable problem, scientists found the system managed to detect icebergs in satellite images with 99 percent accuracy. This included correctly identifying seven icebergs ranging in size from 54 square kilometers (approximately the size of the city of Bern in Switzerland) to 1,052 square kilometers (as large as Hong Kong).
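Once a network has labeled each radar pixel as iceberg or not, the extent measurement itself is simple bookkeeping. A hedged sketch of that final step (the mask here is a stand-in for network output, and the 40-metre pixel spacing is a typical Sentinel-1 figure used as an assumption):

```python
# Sketch: turning a per-pixel segmentation mask into an iceberg area estimate.
# The mask is a stand-in for network output; 40 m pixel spacing is an
# illustrative assumption for medium-resolution Sentinel-1 imagery.
import numpy as np

pixel_size_m = 40.0                        # metres per pixel (assumed)
mask = np.zeros((1000, 1000), dtype=bool)  # pretend network output
mask[200:500, 300:700] = True              # pretend iceberg region

iceberg_pixels = int(mask.sum())
area_km2 = iceberg_pixels * pixel_size_m**2 / 1e6
print(f"Estimated iceberg area: {area_km2:.0f} km^2")  # 192 km^2 here
```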

"This study shows that machine learning will enable scientists to monitor remote and inaccessible parts of the world in almost real-time," study co-author Andrew Shepherd, who is a professor at the Northumbria University in England, said in the statement.

The AI tool also didn't make the same mistakes as other more conventional automated approaches, such as the error of misconstruing individual bits of ice as one collective iceberg, the researchers say.

"Being able to map iceberg extent automatically with enhanced speed and accuracy will enable us to observe changes in iceberg area for several giant icebergs more easily and paves the way for an operational application," said Braakmann-Folgmann.

This research is described in a paper published Thursday (Nov. 9) in the journal The Cryosphere.

Digitizing Healthcare: Can AI Augment Empathy and Compassion in … – MedCity News

With the advent of the latest technologies and software, including generative artificial intelligence (AI), virtual reality, ChatGPT, and others, organizations are racing to find purpose and use for these new tools for fear of losing relevance in the marketplace. In most industries, incorporating new and emerging technologies is seen as innovative, impressive, and ambitious. In healthcare, an industry that holds the lives and touchpoints of care of many populations across the nation in its hands, moving towards digitization inherently demands greater discernment around true impact, quality, and cost.

In the case of healthcare AI, we have seen its arrival bring more interactive and customized patient experiences, streamline or eliminate administrative tasks in hospital and provider workflows, and improve access to healthcare. Yet there is still much work to be done. Some AI tools aren't yet equipped to source from up-to-date and relevant materials, require human editors or handlers to double-check the results, and have yet to be optimized to recognize and appropriately address the human emotion that consumers need. When it comes to addressing evolving patient and healthcare gaps, we are left to question: can AI help augment empathy and compassion in healthcare, or will it eventually crack under the pressure?

What are empathy and compassion?

Empathy and compassion go hand-in-hand. Empathy is simply defined as feeling for someone, or being aware of others' emotions and attempting to understand how they feel. Compassion is defined as feeling for someone and having the desire to help: an emotional response to empathy that evokes a desire to act. Within healthcare, compassion and empathy can play a critical part in improving patient outcomes and furthering patient care quality, yet the industry still struggles to find ways to foster and support compassionate and empathetic care.

Evidence increasingly validates that exhibiting empathy in healthcare settings, whether by providers, professionals or social care workers, is associated with higher satisfaction levels and better health outcomes for patients. Compassionate care is also highly regarded by patients and can help providers determine appropriate care plans that focus on the unique patient's needs based on their care story. It can also strengthen physician-patient relationships as trust is established throughout care. Patients value compassionate and empathic concern as much as, if not more than, technical competence when choosing a physician, yet empathy and compassion among healthcare professionals sometimes decrease over time, especially during training and clinical practice.

The current state of empathy and compassion in healthcare

As we continue to move towards care models that emphasize the attractiveness of value-based care, some argue physicians are unable to empathize with every patient genuinely and effectively without feeling emotionally drained. Compassion fatigue, the burnout and emotional exhaustion seen among healthcare professionals, is another deterrent to improving care, as it can lead to reduced empathy, decreased patient and employee satisfaction, poorer clinical judgment, and other emotional turmoil. Overall, healthcare professionals today find it difficult to provide properly compassionate care under modern time and labor constraints, affecting both provider and patient satisfaction and outcomes, and leaving both feeling unsupported within the care continuum.

In combating time and labor constraints, AI has proven to simplify workloads, maximize time, and offload repetitive or organizational tasks from an already over-burdened workforce. With regard to emotion, empathy and compassion, AI has also progressed to be able to recognize and respond to emotional distress. Experts argue AI cannot replace human empathy, specifically in a healthcare setting where empathy is key to the successful treatment of patients, yet a recent JAMA Internal Medicine report found ChatGPT's patient-provider communication skills were rated higher than those of physician counterparts, including on the empathetic scale. While machines currently cannot feel a need or desire to help, as compassion requires, AI can replicate questions and responses that mimic an empathetic interaction. And while we might question how AI is better able to provide these interactions to patients than providers, we must recognize AI chatbots are not better at empathy; AI is just not under the same time pressures as human clinicians.

How AI can help augment empathy and compassion

There is an inherent opportunity for AI to be used to help physicians provide better, compassionate, and empathetic care. Whether we use AI in training or to help free up time and space for healthcare workers to provide better care, we need to continue exploring effective AI use across the care continuum to help every member, patient and provider included. The healthcare industry should prioritize a patient's experience of compassion and empathy rather than just looking at the outcomes. That way, when using AI tools to augment and improve compassionate and empathetic care, we can ensure high standards are met with each interaction. The measurement of experienced or perceived empathy and compassion can easily be deprioritized in favor of return on investment measured in hard dollars. Yet there is most assuredly a return on investment when an individual stays engaged in their health journey because of compassionate interaction.

Our human impact on the consumer experience needs to be at the forefront of our care as we look to improve performance. Health teams feel a sense of responsibility for their impact on a human's lived experience. This is a foundational element of a better company culture, in which healthcare systems and organizations are better able to impact patient outcomes and ensure the intersection of AI and empathy is beneficial for all.

In its current state, AI can be relied on to improve efficiency and free up time and emotional labor for healthcare professionals to focus more fully on the human side of care: fostering trust and relationships, and properly engaging with patients. Yet to conclude that AI and artificial empathy will evolve enough to completely replace physicians and healthcare workers, or the human side of healthcare, is to misrepresent the issues at hand and the possible solutions. To digitize healthcare and lean on emerging technologies is to find the opportunity within the relationship between machine and human: to augment humans' ability to be human, to empathize and provide the compassionate touch in care.

Generative AI Companies Love This Stock. Could It Be a Winner For … – The Motley Fool

Investors have been falling over themselves to get exposure to artificial intelligence (AI) stocks this year.

Since ChatGPT's launch nearly a year ago, investors have been convinced that AI, and generative AI specifically, will be the next transformative technology, and excitement about the possibilities is a major reason the Nasdaq Composite has soared this year, led by AI stocks such as Nvidia and Microsoft.

However, investing in stocks selling generative AI capabilities like Microsoft or selling the building blocks necessary to run the advanced computing they require, like Nvidia, isn't the only way to get exposure to the fast-growing field.

There's another picks-and-shovels approach to getting exposure to generative AI: buying the stocks of companies that provide the technology these rapidly growing AI companies need. One company that is already serving generative AI customers and is well positioned to benefit from their growth is Amplitude (AMPL 0.20%), a cloud software company that helps businesses learn how their customers use their digital products and how they can improve. For example, Amplitude helped Peloton figure out that social interaction was key to getting loyalty from its members.

Because Amplitude has focused on digital products, it has long been popular with tech start-ups that are eager to see how customers experience their products and improve their user interface, and Amplitude is now seeing a boom in demand from generative AI start-ups.

Two new generative AI companies just became Amplitude customers in its recently reported third quarter. Those are Midjourney, an AI image generation company similar to Stable Diffusion or OpenAI's DALL-E, and Character.ai, a large language model chatbot similar to ChatGPT.

Midjourney is using Amplitude to understand free-to-paid conversions, see how demographics relate to users, and A/B test changes to its user interface. Character.ai, meanwhile, is using Amplitude's analytics and experimentation products to better understand the user experience and improve it.

On the earnings call, Amplitude CEO Spenser Skates said, "Amplitude is the platform of choice for some of the biggest, brightest, and best names in generative AI, helping them guide their businesses in ways that our competitors cannot match."

Skates also saw Amplitude playing a key role for AI companies because they are competing, in large part, on user experience, which makes the product data that Amplitude provides so valuable. He also said the demand from AI companies was part of a larger trend, adding, "I would compare AI to actually previous waves of technological innovation. We've seen stuff like VR, crypto, mobile, SaaS, all of the new companies that those categories created ended up becoming day one Amplitude customers from the very start, starting out with a small -- growing with us -- you know, us growing with them over time as they continue to scale."

Amplitude's ability to grow with its customers is key here.

Amplitude stock sold off following its third-quarter earnings report on Tuesday night, even as the company topped estimates in the report and raised its guidance.

Like a lot of cloud software companies, Amplitude is still seeing some macro-related challenges as many of its customers have grown more cautious, and it's seen an uptick in churn. That's not surprising, given the recent layoffs in the tech sector and broader fears of a recession.

In an interview I had with Skates, he expressed optimism about faster growth returning by the second half of next year, but the company is also making progress on the bottom line. It's on track for an adjusted operating income profit in the second half of this year, and it posted $7.5 million in free cash flow in the third quarter, giving it a free cash flow margin of more than 10% in the quarter.

Still, the company's traction with tech start-ups like Midjourney and Character.ai could be its biggest strength as the artificial intelligence industry is expected to explode. Digital product usage should only increase with the proliferation of generative AI tools, and Amplitude is well positioned as a leader in product analytics.

With a market cap just north of $1 billion, the company has a lot of upside potential if it can take advantage of the generative AI wave.

Address the Issue of AI-Generated Evidence | New Jersey Law Journal – Law.com

The New York Assembly now has before it A8110, a bill that precludes the admissibility of evidence in criminal proceedings that is created in whole or in part by artificial intelligence, unless the evidence is substantially supported by independent and admissible evidence and the proponent of the evidence establishes the reliability and accuracy of the specific use of the artificial intelligence in creating the evidence. The bill likewise precludes evidence that is processed by artificial intelligence unless the reliability and accuracy of the particular uses of the AI are established. The bill then proceeds to define how to determine these components. It further addresses amending the rules of evidence as applicable to civil proceedings, requiring AI-created and AI-processed evidence to be subject to the same standards as for criminal proceedings.

The current rules of evidence provide means for authenticating other technologically created evidence, such as photographs. For example, in a 2015 case the North Dakota Supreme Court admitted into evidence a photograph that had been cropped and resized, allowing the manipulated (i.e., altered) image based on the expert testimony of someone with knowledge of how the manipulation was done. In another case, in 2004, the Connecticut Supreme Court permitted a manipulated photograph of a bite mark on a victim, corroborated by the testimony of the witness who performed the enhancement.

Google in talks to invest hundreds of millions into AI startup … – CTech

Alphabet's Google is in talks to invest in Character.AI, an artificial intelligence chatbot startup. Character.AI was founded by Noam Shazeer and Daniel De Freitas, two former employees at Google Brain, and the tech giant is expected to invest hundreds of millions of dollars as Character.AI seeks to train models to keep up with user demands, two sources briefed on the matter told Reuters.

The investment, which could be structured as convertible notes, according to a third source, will deepen the existing partnership Character.AI already has with Google, in which it uses Google's cloud services and Tensor Processing Units (TPUs) to train models.

Character.AI allows people to chat with virtual versions of celebrities like Billie Eilish or anime characters, while creating their own chatbots and AI assistants. It is free to use, but offers a subscription model that charges $9.99 a month for users who want to skip the virtual line to access a chatbot.

According to data from Similarweb, Character.AI's chatbots, with various roles and tones to choose from, have appealed to users ages 18 to 24, who contributed about 60% of its website traffic. The demographic is helping the company position itself as the purveyor of more fun personal AI companions, compared to other AI chatbots from OpenAI's ChatGPT and Google's Bard.

The company previously said its website had attracted 100 million monthly visits in the first six months since its launch.

The story, broken exclusively by Reuters, comes as the startup is also in talks to raise equity funding from venture capital investors, which could value the company at over $5 billion, sources said. In March, it raised $150 million in a funding round led by Andreessen Horowitz at a $1 billion valuation.

The best AI tools to power your academic research – Euronews

The future of academia is likely to be transformed by AI language models such as ChatGPT. Here are some other tools worth knowing about.

"ChatGPT will redefine the future of academic research. But most academics don't know how to use it intelligently," Mushtaq Bilal, a postdoctoral researcher at the University of Southern Denmark, recently tweeted.

Academia and artificial intelligence (AI) are becoming increasingly intertwined, and as AI continues to advance, it is likely that academics will continue to either embrace its potential or voice concerns about its risks.

"There are two camps in academia. The first is the early adopters of artificial intelligence, and the second is the professors and academics who think AI corrupts academic integrity," Bilal told Euronews Next.

He places himself firmly in the first camp.

The Pakistani-born and Denmark-based professor believes that if used thoughtfully, AI language models could help democratise education and even give way to more knowledge.

Many experts have pointed out that the accuracy and quality of the output produced by language models such as ChatGPT are not trustworthy. The generated text can sometimes be biased, limited or inaccurate.

But Bilal says that understanding those limitations, paired with the right approach, can make language models do a lot of quality labour for you, notably for academia.

To create an academia-worthy structure, Bilal says it is fundamental to master incremental prompting, a technique traditionally used in behavioural therapy and special education.

It involves breaking down complex tasks into smaller, more manageable steps and providing prompts or cues to help the individual complete each one successfully. The prompts then gradually become more complicated.

In behavioural therapy, incremental prompting allows individuals to build their sense of confidence. In language models, it allows for way more sophisticated answers.

In a Twitter thread, Bilal showed how he managed to get ChatGPT to provide a brilliant outline for a journal article using incremental prompting.

In his demonstration, Bilal started by asking ChatGPT about specific concepts relevant to his work, then about authors and their ideas, guiding the AI-driven chatbot through the contextual knowledge pertinent to his essay.

"Now that ChatGPT has a fair idea about my project, I ask it to create an outline for a journal article," he explained, before declaring that the results he obtained would likely save him 20 hours of labour.

"If I just wrote a paragraph for every point in the outline, I'd have a decent first draft of my article."
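The pattern itself is easy to automate: send the model a sequence of increasingly specific prompts within one conversation, so each answer builds on the ones before it. A hedged sketch using the OpenAI Python client (the model name and prompts are illustrative assumptions; any chat-style API follows the same shape):

```python
# Sketch of incremental prompting: each question builds on earlier answers
# within a single conversation. Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

steps = [
    "Explain the concept of 'postcolonial literature' in two sentences.",
    "Name three authors central to it and summarize their key ideas.",
    "Using the concepts and authors above, outline a journal article on "
    "postcolonial themes in contemporary South Asian fiction.",
]

messages = []
for prompt in steps:
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o-mini",
                                           messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"--- {prompt}\n{answer[:200]}...\n")
```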

Incremental prompting also allows ChatGPT and other AI models to help when it comes to "making education more democratic," Bilal said.

Some people have the luxury of discussing with Harvard or Oxford professors potential academic outlines or angles for scientific papers, "but not everyone does," he explained.

"If I were in Pakistan, I would not have access to Harvard professors but I would still need to brainstorm ideas. So instead, I could use AI apps to have an intelligent conversation and help me formulate my research".

Bilal recently made ChatGPT think and talk like a Stanford professor. Then, to fact-check how authentic the output was, he asked the same questions to a real-life Stanford professor. The results were astonishing.

ChatGPT is only one of the many AI-powered apps you can use for academic writing, or to mimic conversations with renowned academics.

Here are other AI-driven software to help your academic efforts, handpicked by Bilal.

In Bilal's own words: "If ChatGPT and Google Scholar got married, their child would be Consensus, an AI-powered search engine".

Consensus looks like most search engines but what sets it apart is that you ask Yes/No questions, to which it provides answers with the consensus of the academic community.

Users can also ask Consensus about the relationship between concepts and about something's cause and effect. For example: Does immigration improve the economy?

Consensus would reply to that question by stating that most studies have found that immigration generally improves the economy, providing a list of the academic papers it used to arrive at the consensus, and ultimately sharing the summaries of the top articles it analysed.

The AI-powered search engine is only equipped to respond to six topics: economics, sleep, social policy, medicine, mental health, and health supplements.

Elicit, "the AI research assistant" according to its founders, also uses language models to answer questions. Still, its knowledge is solely based on research, enabling "intelligent conversations" and brainstorming with a very knowledgeable and verified source.

The software can also find relevant papers without perfect keyword matches, summarise them and extract key information.

Although language models like ChatGPT are not designed to intentionally deceive, it has been proven they can generate text that is not based on factual information, and include fake citations to papers that don't exist.

But there is an AI-powered app that gives you real citations to actually published papers - Scite.

"This is one of my favourite ones to improve workflows," said Bilal.

Similar to Elicit, upon being asked a question, Scite delivers answers with a detailed list of all the papers cited in the response.

"Also, if I make a claim and that claim has been refuted or corroborated by various people or various journals, Scite gives me the exact number. So this is really very, very powerful".

"If I were to teach any seminar on writing, I would teach how to use this app".

"Research Rabbit is an incredible tool that FAST-TRACKS your research. Best part: it's FREE. But most academics don't know about it,"tweeted Bilal.

Called by its founders "the Spotify of research," Research Rabbit allows adding academic papers to "collections".

These collections allow the software to learn about the user's interests, prompting new relevant recommendations.

Research Rabbit also allows visualising the scholarly network of papers and co-authorships in graphs, so that users can follow the work of a single topic or author and dive deeper into their research.

ChatPDF is an AI-powered app that makes reading and analysing journal articles easier and faster.

"It's like ChatGPT, but for research papers," said Bilal.

Users start by uploading the research paper PDF into the AI software and then start asking it questions.

The app then prepares a short summary of the paper and provides the user with examples of questions that it could answer based on the full article.

The development of AI will be as fundamental "as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone," wrote Bill Gates in the latest post on his personal blog, titled The Age of AI Has Begun.

"Computers havent had the effect on education that many of us in the industry have hoped," he wrote.

"But I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionising the way people teach and learn".

Why you Shouldn’t Trust AI to Identify your Mushrooms – Medium

We've found the weak point of AI: mushrooms!

A plethora of applications designed to help foragers identify wild mushrooms have surfaced in app marketplaces such as the App Store and Play Store. These applications, which make use of AI, assert that they can recognize different kinds of mushrooms from a smartphone photo alone.

Although AI has advanced quickly in several fields recently, it appears to be less effective at identifying mushrooms. According to recent studies, these apps have a proven error rate of at least 50%, indicating that they regularly make mistakes.

AI image recognition systems are not reliable for identifying mushrooms because you simply can't tell if a mushroom is edible by just looking at it.

"The yellow stainer mushroom is non-edible and it looks just like an edible horse mushroom from above and the side. You need to pick it up and scratch it or smell it to actually tell what it is," explains Colin Davidson, a mushroom forager with a PhD in microbiology.

Indeed, there are many edible mushrooms that resemble toxic ones. To recognize them, you must examine them from several perspectives in order to spot characteristics like rings and gills.

Furthermore, mushrooms are highly polymorphic, often changing both in color and shape. External factors such as weather conditions can also influence their appearance; for instance, rain can discolor the cap of mushrooms.

All of this is currently beyond the reach of image recognition algorithms, which lack enough data to properly identify mushrooms. And that explains why these apps make mistakes half the time.

Relying on these apps can be quite dangerous, as evidenced by the situation in France. Of the 30,000 types of mushrooms found in France, only 100 are edible and twenty are potentially fatal. In 2021, there were approximately 1,250 poisoning cases, and in 2022 the number rose to almost 2,000, including around 40 severe cases and 2 fatalities.
