Re-engineering the way the construction business works – Global Construction Review

In Australia, a voluntary alliance of tier 1 contractors, government agencies, specialist subcontractors, engineering firms, cost consultants, architects, and material manufacturers representing the whole construction supply chain are working together to find ways to drive carbon out of the construction process.

Initiated in 2022, it has 160 members, and it's growing.

They think of themselves as a "do tank", not a think tank, because Mecla's 11 working groups ask what each bit of the sector can start doing this afternoon, tomorrow, and the next day to build momentum for change.

They know construction is a complex market system with carbon entrenched, and that they won't be able to build a low-carbon market unless everyone works together on it.

It has to be done now, they say. There's no time for incremental change.

When that new market is functioning with as little carbon as possible, Mecla disappears.

In March, CIOB's head of environmental sustainability, Amanda Williams, interviewed Mecla's chair, Hudson Worsley. What follows are some highlights of that conversation. More will follow on the 21CC podcast in June.

AW: So you've got this big goal of cutting embodied carbon in the built environment but, in your mind, what does the roadmap look like to getting there?

HW: Okay, we've got these 160 organisations. We're building critical mass at the tier 1 level of the Australian market, and with government agencies like Transport for New South Wales and Infrastructure Victoria.

The next step is tier 2 contractors and suppliers because outside the bubble of the leadership group, many industry participants still don't have embodied carbon on their radar. So, getting them conversant with the concepts and opportunities, understanding that government procurement is going to require this and that it will filter down to private procurement, that's key.

The next is advocacy to government to keep the pressure up. We want headline projects like the Brisbane 2032 Summer Olympics to be carbon-zero, or low embodied carbon where we can. We want government to understand that their procurement role is an absolutely vital lever for this change.

We want to maintain pressure on suppliers. It's really hard, particularly steel and cement. We need to keep that pressure up. No resting on your laurels that you've done a good job so far, or you've made a breakthrough. Until we're at zero, we've got to keep pushing.

We also want to influence designers and architects, so it's not just about material substitution, it's about better design. It's quite common in Australia to double the size of the girder, or the pillar, or the pile, because of risk aversion. Until climate came along, that was the safe thing to do, but now that carbon is a measure of success, we have to look at design as a big opportunity.

Finally, it's supporting the ecosystem. So when there's a legal issue over how to specify a carbon clause, we get the relevant people onto it. When there's a change to the curriculum for architects at university level, let's see what we can do to help that.

We're penning the roadmap as we drive along; it's not predefined. Opportunities come as we move; that's what rapid change looks like, and how it has to be.

AW: How close are we to low-emissions concrete, for example, and how important is cracking those individual material types?

HW: We've been using cement since the Romans, and it's the most ubiquitous material, so we can't exactly go cold turkey.

There've been big strides. Some of the global cement makers can offer an off-the-shelf product with 60% less embodied carbon. And that's fantastic: 60% is a big saving.

But even though it's a saving, it's still additional emissions because we're building new things that weren't there before, so there is a way to go.

There are alternatives, like geopolymer cement using substitute cementitious materials. The use case for those, where they are best suited and where they are not, is still being hammered out. One of the barriers to entry can be very prescriptive specifications, rather than performance-based specifications.

But essentially, we're still super-heating lime to develop clinker as the core component of cement and concrete. That last 40% is still hard, particularly where really high-performance concrete is required.

But we still need to encourage take-up of the lower-emissions alternatives already there. We've heard that tier 2 contractors just flatly say they're not taking the risk. If the foundations and floor of the warehouse fail, end of company. At the same time, tier 1 contractors say they've been doing it for a couple of years and it's absolutely fine. We want to bring those players together to talk and find a way through.

So it's not just "Thou shalt cut emissions", it's "hey, this is where it was done, this is how it worked, this is what was overcome."

We call ourselves a "do tank" rather than a think tank, and that's the doing: bringing those who've done it together to show those who are yet to do it that this is the way forward, so let's get on with it.

Originally posted here:

Re-engineering the way the construction business works - Global Construction Review

Social media revives chip art AKA silicon doodles and keeps its history alive – NPR

Kenton Smith designs circuit boards and has long been fascinated by computers. He was examining chips a few years ago when he found one smiling back up at him. (Photo courtesy of Kenton Smith)

An owl. A sharky-looking bullet. The Hindu deity Ganesh. The Yin and Yang sign. All painstakingly selected and etched onto a microchip that measures about an inch square. Each microscopic silicon doodle was the handiwork of engineers at Qualcomm Incorporated, a San Diego-based company that creates wireless technology-related products and services. The engineers slipped the drawings into Qualcomm's Q1650 data decoder with care not to disturb any of the chip's functions.

They were purposeless etchings, never meant to be uncovered.

These doodles, also known as silicon art, chip graffiti or chip art, and dozens of others like them, are remnants of tech history, from Silicon Valley's infancy to the early 2000s, when innovation was rapid-fire and the tech still had a very human touch. Engineers would add the sketches to their microchip designs in the techie equivalent of signing their artwork. They'd etch them on chips that may end up in your cellphone, laptop or calculator. They spent hours crafting them, even though they were frowned upon by those in the C-suite.

The existence of these doodles came to light decades ago, but social media is discovering them anew. And there is now a small but determined group of online hobbyists working to keep that history alive. They are still cataloguing the minuscule drawings, many smaller than the width of a human hair, which can't be seen without a microscope.

These devotees post glossy videos of themselves shucking chips like oysters to see their iridescent insides and the itsy bitsy sketches that may be hidden on them. And they are eagerly saving them from the scrap heap.

Richard Kerr spent 15 years at Qualcomm starting in the mid 1980s.

His first doodle was inspired by his then-4-year-old son, Jesse Kerr, who would ask to help with his chip designs.

"Well, he was into trucks at the time, so I designed a little '56 Chevy pickup truck," Kerr said.

Kerr only recently learned that others had discovered his decades-old doodles, from a friend who found a video on YouTube of the doodle-packed Q1650 data decoder. "I was just blown away that somebody actually found it. And then it was like, you know, total memory lane," he said.

"Mine is the shark bullet. Because my son (Jesse) was a body boarder at that time. And so he decided that that was cool because he saw it on somebody's wetsuit," Kerr said.

Many of the doodles came from engineers who weren't doing it for an audience.

"We did it for ourselves," said Willy McAllister, a retired electrical engineer who worked for more than a decade at Hewlett-Packard (HP) and helped craft a chip with the sleek image of a cheetah on it. "Nobody ever expected it to be cracked open 10 years later and marveled at. That was never the point."

The cheetah was picked as a visual representation for an HP project code named after the world's fastest land animal.

McAllister's wife, Monica, sketched the cheetah, which was transferred to the chip. One of the big cat's spots is about 4 microns across, far thinner than a sheet of paper.

So how did this tiny art make its debut? The late research scientist Michael W. Davidson, who ran the optical microscopy laboratory at Florida State University for about a decade until his retirement and death in 2015, had a microscope he used to photograph integrated circuits. One day in the late '90s, while scanning a circuit, he found Waldo. Or at least, the etched outline of Waldo's head.

"Waldo is the first Silicon Creature that we discovered," reads the Silicon Zoo website. "And this led to an exhaustive search for more creatures and construction of the Silicon Zoo gallery."

Davidson built the Silicon Zoo website to record his findings and he was tireless in his search for this art.

"At one point, I would say in '99 it was that the word was out in Tallahassee that we were collecting (chips)," said Eric Clark, who helped Davidson build the site. "We had a mountain of just computer parts and chips."

The site has catalogued dozens of doodles, many of which wink at the function of the integrated circuit, like an elephant on a memory chip, or a can o' worms nodding to the many problems a designer faced.

But there are still more out there.

That's where people like Kenton Smith come in. Smith designs circuit boards and has long been fascinated by computers. He was examining chips a few years ago when he found one smiling back up at him.

"I was just opening up a bunch of different types of chips," he said. "And eventually I came to one and it had a smiley face on it. And to find that was one of the best feelings I've ever had in the world."

The discovery led him deep into the world of silicon scribbles, and he now buys bulk chip lots on eBay and regularly posts videos showcasing his finds.

"One of the driving forces, I think, to buying so much stuff on eBay is that there's always this feeling that if I don't get it, I may never see that chip again," Smith said. "And it's going to be recycled and we'll lose parts of our history."

Smith posts about his finds across various social media, using his @Evilmonkeyzdesignz accounts that have hundreds of thousands of followers among them.

This is a hobby that takes dedication. First you have to find the chips. Then you have to snip them from their casings. Sometimes the chips shatter into pieces. Sometimes acid is needed to eat away at decades-old adhesive.

"I go to flea markets almost every week," said the man behind the @CPU_Duke accounts. "I'm looking at boards and I get a feeling for this could be interesting, this is an interesting brand. Could have a doodle on it. An Easter egg." He didn't want to use his real name for privacy reasons.

The doodles represent an amusing collision of technology and art.

Albert Folch, a professor at the University of Washington's bioengineering department, is an expert in the field. He has long merged science and art in a program he calls BAIT (Bringing Art Into Technology). The walls in his office are covered in colorful microfluidic artwork, which blends scientific photography with artistic flourishes.

For Folch, the micro doodles are a joyful distraction.

"I think it's a lot of fun," he said. "When I was designing chips myself, I never did it. But I think it's something that if it had occurred to me, I probably would have done it, just for fun."

Visit link:

Social media revives chip art AKA silicon doodles and keeps its history alive - NPR

Kat Meme Coin: The new crypto sensation with a Vitalik Buterin endorsement – crypto.news

Disclosure: This article does not represent investment advice. The content and materials featured on this page are for educational purposes only.

In the rapidly evolving world of cryptocurrencies, a new player has entered the market, capturing both attention and intrigue.

Dubbed Kat Meme Coin, this new meme-based cryptocurrency has become a topic of considerable discussion, especially following an unexpected endorsement from Ethereum co-founder Vitalik Buterin.

Kat Meme Coin emerged from the burgeoning trend of cryptocurrencies that leverage popular internet culture to gain rapid user adoption. Drawing inspiration from various internet memes featuring cats, the creators of Kat Meme Coin have cleverly combined the viral nature of memes with the robust, speculative dynamics of the crypto market. The result is a digital currency that not only appeals to crypto enthusiasts but also to a broader audience that enjoys internet humor.

The cryptocurrency community was taken by surprise when Vitalik Buterin, a well-known figure in the blockchain space, announced his investment in Kat Meme Coin. Buterin, whose endorsements are few and far between, praised the project for its innovative approach to community engagement and its potential to bring new users into the cryptocurrency ecosystem.

"I am always excited to see new projects that challenge the conventional and bring about a wave of freshness to the blockchain space," Buterin said in a recent interview. "Kat Meme Coin, with its unique blend of humor and serious technology, presents an intriguing proposition."

Despite its whimsical origins, Kat Meme Coin is built on a solid technological foundation. It operates on a decentralized blockchain platform that ensures security and transparency. The developers have implemented a unique deflationary mechanism where a small percentage of coins are burned with every transaction, potentially increasing the value of the remaining coins over time.
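
As a rough illustration of how a burn-on-transfer mechanism works in general, the sketch below tracks balances and total supply in plain Python. It is not Kat Meme Coin's actual contract code; the 1% burn rate and the account names are assumptions made purely for demonstration, and real tokens implement this logic in an on-chain smart contract.

```python
# Minimal sketch of a burn-on-transfer ("deflationary") token ledger.
# Assumptions: a hypothetical 1% burn rate and simple in-memory balances.

BURN_RATE = 0.01  # assumed 1% burned on every transfer

class DeflationaryToken:
    def __init__(self, initial_supply: float, owner: str):
        self.balances = {owner: initial_supply}
        self.total_supply = initial_supply

    def transfer(self, sender: str, recipient: str, amount: float) -> None:
        if self.balances.get(sender, 0.0) < amount:
            raise ValueError("insufficient balance")
        burned = amount * BURN_RATE            # portion destroyed forever
        received = amount - burned             # portion the recipient gets
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0.0) + received
        self.total_supply -= burned            # shrinking supply is the "deflation"

token = DeflationaryToken(1_000_000, "alice")
token.transfer("alice", "bob", 10_000)
print(token.total_supply)  # 999900.0 -- supply falls with every transaction
```

Whether a shrinking supply actually raises the price depends on demand, which is why the description above hedges with "potentially."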

The future of Kat Meme Coin looks promising. With its community-driven approach, the project has outlined several upcoming initiatives, including partnerships with online content creators, merchandise stores, and possibly even adoption by e-commerce platforms to use Kat Meme as a legitimate form of payment.

Since its launch, Kat Meme Coin has seen a fluctuating yet positive market response. The coin experienced a significant surge in value following Buterin's endorsement, demonstrating the impact of high-profile support in the cryptocurrency market.

However, the project is not without its challenges. Critics argue that the long-term viability of meme coins is uncertain, and their value can be highly volatile. The developers of Kat Meme Coin will need to maintain the momentum and ensure that the project continues to evolve to sustain interest and value.

Disclosure: This content is provided by a third party. crypto.news does not endorse any product mentioned on this page. Users must do their own research before taking any actions related to the company.

Read the rest here:

Kat Meme Coin: The new crypto sensation with a Vitalik Buterin endorsement - crypto.news

Arkham’s top 5 doxed crypto hodlers own $3.5B, but 35% is untouchable – Cointelegraph

The top five identified crypto whales with publicly known wallet addresses hold around $3.5 billion in crypto, according to a dashboard from blockchain intelligence firm Arkham.

However, due to lost passwords and private keys, a large chunk of it is inaccessible.

On April 15, the on-chain intelligence platform announced the updated dashboard revealing the holdings of some of the world's richest on-chain-verified crypto holders and whales, including Tron's Justin Sun and Ethereum's Vitalik Buterin.

The top five crypto holders on its list currently own an aggregate of $3.47 billion in digital assets, according to the data. However, around 35%, or $1.21 billion, of that total has been flagged as inaccessible.

Justin Sun tops the list with a reported $1.06 billion in his crypto wallet. The Tron network founder's wallet is heavy on his own stablecoin, Decentralized USD (USDD), with $275 million in holdings, followed by the network's native token, Tron (TRX), of which he holds $237 million worth.

Rain Lõhmus, the founder of Estonia-based LHV Bank, is second on Arkham's list with $769 million worth of Ether. However, it has been flagged as inaccessible, as it was reported in November that he lost access to his private key.

Ethereum co-founder Vitalik Buterin comes third with a wallet containing $757 million, primarily in ETH, of which he holds 245,425 tokens.

The Bitcoin wallet of the former chief technology officer at Ripple, Stefan Thomas, shows up as fourth on the list. However, his $442 million worth of Bitcoin (BTC) has also been flagged inaccessible.

Thomas lost access to more than 7,000 Bitcoin in 2011 after losing the password to the encrypted hard drive holding the details of his cryptocurrency.

In October, crypto recovery firm Unciphered offered to unlock the IronKey hard drive.

Related: Ethereum's next hard fork could make lost private keys a thing of the past

Crypto venture capitalist James Fickel rounds out the top five with $436 million worth of digital assets in his wallet, according to Arkham.
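
The headline figures can be reproduced from the five wallets listed above. The short check below uses only the per-holder values reported in this article, in millions of dollars; rounding explains the small gap to the quoted $3.47 billion total.

```python
# Reproducing Arkham's aggregate figures from the per-holder values above
# (all amounts in millions of US dollars, as reported in the article).
holders = {
    "Justin Sun": (1060, True),       # accessible
    "Rain Lõhmus": (769, False),      # private key reported lost
    "Vitalik Buterin": (757, True),
    "Stefan Thomas": (442, False),    # IronKey password lost
    "James Fickel": (436, True),
}

total = sum(value for value, _ in holders.values())
inaccessible = sum(value for value, ok in holders.values() if not ok)

print(f"total: ${total/1000:.2f}B")               # total: $3.46B (~$3.47B quoted)
print(f"inaccessible: ${inaccessible/1000:.2f}B") # inaccessible: $1.21B
print(f"share locked: {inaccessible/total:.0%}")  # share locked: 35%
```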

Other notable crypto wallets in the top ten include early adopter Patricio Worthalter, who holds $219 million worth, primarily in ETH. General Partner at Cluster Capital, Winslow Strong, is listed as holding $27.5 million worth of digital assets.

For those included, the dashboard displays public wallets tagged by Arkham, their current token holdings and balance history, a breakdown of portfolios across chains and historical daily balances, and recent activity.

However, the list doesn't contain whale wallets that are linked to unknown or pseudonymous owners, such as Bitcoin's creator Satoshi Nakamoto, whose BTC holdings are estimated to be worth tens of billions across multiple addresses, or other crypto titans that currently aren't linked to a particular wallet address.

See more here:

Arkham's top 5 doxed crypto hodlers own $3.5B, but 35% is untouchable - Cointelegraph

Artificial Intelligence (AI) and the Church of Africa: An interview with Fr Joel Nkongolo – Vatican News – English

Congolese national and Claretian priest Fr Joel Nkongolo recently spoke to Fr Paul Samasumo of Vatican News about AI's implications or possible impact on the African Church. Fr Nkongolo is currently based in Nigeria.

Fr Paul Samasumo, Vatican City.

How would you define or describe Artificial Intelligence (AI)?

Artificial Intelligence encompasses a wide range of technologies and techniques that enable machines to mimic human cognitive functions. Machine learning, a subset of AI, allows systems to learn from data without being explicitly programmed. For example, streaming platforms like Netflix use recommendation algorithms to analyze users viewing history and suggest relevant content. Computer vision technology, another aspect of AI, powers facial recognition systems used in security and authentication applications.
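
To make the recommendation example concrete, here is a minimal sketch of the general idea behind such systems: score unseen titles by their similarity to what a user has already watched. It illustrates the technique only; the titles and genre vectors are invented for the example, and streaming services use far more sophisticated models.

```python
# Minimal sketch of content-based recommendation: rank unseen titles by
# cosine similarity between their genre vectors and the user's viewing history.
import math

catalog = {                      # genre vector: [drama, comedy, documentary]
    "Title A": [1.0, 0.0, 0.0],
    "Title B": [0.0, 1.0, 0.0],
    "Title C": [0.7, 0.0, 0.3],
}
watched = ["Title A"]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Average the vectors of watched titles to build a simple taste profile.
profile = [sum(col) / len(watched) for col in zip(*(catalog[t] for t in watched))]

suggestions = sorted(
    (t for t in catalog if t not in watched),
    key=lambda t: cosine(profile, catalog[t]),
    reverse=True,
)
print(suggestions)  # ['Title C', 'Title B'] -- the drama-leaning title ranks first
```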

Should we be worried and afraid of Artificial Intelligence?

While AI offers numerous benefits, such as improved efficiency, productivity, and innovation, it also raises legitimate concerns. One concern is job displacement, as automation could replace certain tasks traditionally performed by humans. For instance, a study by the McKinsey Global Institute suggests that up to 800 million jobs could be automated by 2030. Additionally, there are ethical concerns surrounding AI, such as algorithmic bias, which can perpetuate discrimination and inequality. For example, facial recognition systems have been found to exhibit higher error rates for people with darker skin tones, leading to unfair treatment in areas like law enforcement.

Africa's journey towards embracing AI seems relatively slow. Is this a good or bad thing?

Africa's adoption of AI has been relatively slow compared to other regions, attributed to factors such as limited infrastructure, digital literacy, and funding. However, this cautious approach can also be viewed as an opportunity to address underlying challenges and prioritize ethical considerations. For example, Ghana recently established two AI Centres to develop AI capabilities while ensuring ethical AI deployment. By taking a deliberate approach, African countries can tailor AI solutions to address local needs and minimize potential negative impacts.

How do you see Artificial Intelligence affecting or impacting the Church in Africa and elsewhere? Should the Church be worried about Artificial Intelligence?

AI can enhance various aspects of Church operations, such as automating administrative tasks, analyzing congregation demographics for targeted outreach, and providing personalized spiritual guidance through chatbots. However, there are ethical considerations, such as ensuring data privacy and maintaining human connection amid technological advancements. For example, sections of the Church of England utilize AI-powered chatbots to engage with congregants online, offering pastoral support and prayer. While AI can augment the Church's outreach efforts, it's essential to maintain human oversight and uphold ethical standards in its use.

How can the Church influence ethical behaviour and good social media conduct?

The Church can leverage its moral authority to promote ethical behaviour and responsible social media use. For instance, Pope Francis has spoken out against the spread of fake news and social media polarisation, emphasizing the importance of truth and dialogue. Additionally, initiatives like Digital Catholicism involve leveraging online media technologies as tools for evangelization while simultaneously spreading the message of faith in cyberspace itself. So, by modelling ethical behaviour and offering guidance on digital citizenship, the Church can foster a culture of respect, empathy, and truthfulness in online interactions.

How can parents, guardians, teachers, parish priests, or pastors help young people avoid becoming enslaved by these technologies?

Adults play a crucial role in guiding young people's use of technology and promoting healthy digital habits. For example, parents and teachers can educate children about the risks of excessive screen time and the importance of balance in their online and offline activities. They can also set limits on device usage, encourage outdoor play, and foster face-to-face social interactions. Moreover, religious leaders can incorporate teachings on mindfulness, self-discipline, and responsible stewardship of technology into their spiritual guidance, helping young people cultivate a healthy relationship with digital media.

Can individuals and society do anything to protect themselves from potential AI harm or abuse by non-democratic governments?

Individuals and civil society organizations can take proactive measures to safeguard against AI abuse by authoritarian regimes. For example, they can advocate for legislation and regulations that protect digital rights, privacy, and freedom of expression. Tools like virtual private networks (VPNs) and encrypted messaging apps can help individuals circumvent government surveillance and censorship. Moreover, international collaboration and solidarity among democratic nations can amplify efforts to hold oppressive regimes accountable for AI misuse and human rights violations.

What would your advice be to those working in education or schools regarding teaching about AI?

Educators have a vital role in preparing students for the AI-driven future by fostering critical thinking, creativity, and ethical decision-making skills. For example, integrating AI literacy into the curriculum can help students understand how AI works, its societal impacts, and ethical considerations. Projects like Google's AI for Social Good initiative provide educational resources and tools for teaching AI concepts in schools. By empowering students to become responsible AI users and innovators, educators can effectively equip them to navigate the opportunities and challenges of the digital age.

Fr Nkongolo, thank you for your time and help in navigating these issues.

Fr. Paul Samasumo, these examples, comparisons, and statistics illustrate the multifaceted nature of AI and its implications for society, including the Church and education. I hope they provide a comprehensive perspective on these complex issues.

Read the original:
Artificial Intelligence (AI) and the Church of Africa: An interview with Fr Joel Nkongolo - Vatican News - English

A.I. Has a Measurement Problem – The New York Times

There's a problem with leading artificial intelligence tools like ChatGPT, Gemini and Claude: We don't really know how smart they are.

That's because, unlike companies that make cars or drugs or baby formula, A.I. companies aren't required to submit their products for testing before releasing them to the public. There's no Good Housekeeping seal for A.I. chatbots, and few independent groups are putting these tools through their paces in a rigorous way.

Instead, we're left to rely on the claims of A.I. companies, which often use vague, fuzzy phrases like "improved capabilities" to describe how their models differ from one version to the next. And while there are some standard tests given to A.I. models to assess how good they are at, say, math or logical reasoning, many experts have doubts about how reliable those tests really are.

This might sound like a petty gripe. But I've become convinced that a lack of good measurement and evaluation for A.I. systems is a major problem.

For starters, without reliable information about A.I. products, how are people supposed to know what to do with them?

I can't count the number of times I've been asked in the past year, by a friend or a colleague, which A.I. tool they should use for a certain task. Does ChatGPT or Gemini write better Python code? Is DALL-E 3 or Midjourney better at generating realistic images of people?

See the rest here:
A.I. Has a Measurement Problem - The New York Times

BrightNight Unveils PowerAlpha Artificial Intelligence Software Platform at the "AI: Powering the New Energy Era … – PR Newswire

Trailblazing integrated AI platform ushers in an era of firm, affordable, clean energy.

WEST PALM BEACH, Fla., April 17, 2024 /PRNewswire/ -- BrightNight, the next-generation global renewable power producer built to deliver clean and dispatchable solutions, today introduced PowerAlpha, the proprietary software platform that uses cutting-edge Artificial Intelligence, data analytics, and cloud computing to design, optimize, and operate renewable power plants with industry-leading economics.

PowerAlpha was unveiled today at the "AI: Powering the New Energy Era" summit in Washington, D.C. Sponsored by BrightNight and hosted by the Foundation for American Science and Technology, it is the first independent industry summit in North America exclusively dedicated to exploring the influence of Artificial Intelligence on the energy sector. The summit brought together senior public officials, policymakers, energy industry leaders, experts, academia, and investors, including co-sponsors NVIDIA, IBM, C3.ai and Qcells, and representatives from the Department of Energy, U.S. Congress, Intel, Alphabet (Google), EEI, Bank of America, KPMG, EPRI, and other organizations.

BrightNight's PowerAlpha platform accelerates the global clean energy transition and decarbonization by ensuring the generation of reliable power at the lowest attainable cost across various geographies and power grids. Its benefits are realized across the design, optimization, and operation stages of a project's lifecycle.

PowerAlpha's unique dispatch hybrid controls interface can also be integrated with utilities' Energy Management Systems for even more operational efficiency and integrated asset management solutions. Also, with the increasing demand for more power to support the growing use of AI applications and corresponding data centers, PowerAlpha is leveraging AI capabilities to drive greater efficiencies, better designs, and higher capacity renewable projects.

Martin Hermann, CEO of BrightNight, said: "Through relentless innovation, the dream of a sustainable clean energy transition, where renewable power is affordable and reliable, is now within reach. The integration of Artificial Intelligence with renewable energy offers solutions to numerous challenges, from demand forecasting and smart grid management to labor shortages and the design of efficient power projects. I am proud that BrightNight is at the forefront of this technological revolution, bringing the clean energy transition closer to reality than ever before."

Kiran Kumaraswamy, CTO of BrightNight, said: "PowerAlpha provides a fully integrated platform that is highly differentiated in the marketplace in its ability to design, optimize, and operate hybrid renewable power projects with cutting edge capabilities enabled by AI and utilizing patent-pending algorithms. Our team is redefining industry standards, optimizing renewable projects globally to integrate new load like data centers and increase utilization of existing infrastructure."

BrightNight has delivered a number of PowerAlpha projects and use cases in the U.S. and globally, where it is helping customers optimize existing transmission infrastructure, integrate long- and short-duration storage solutions, increase capacity, and improve dispatchability of renewable assets, as well as repower existing projects.

BrightNight is also using PowerAlpha to develop projects with its partner in India and the Philippines, ACEN, one of Asia's leading renewable companies. ACEN Group CIO Patrice Clausse said, "PowerAlpha has been an important enabler for our investment decision-making. We feel confident leveraging its capabilities to simulate the performance of numerous hybrid renewable power plant configurations incorporating solar, wind, and energy storage and identify the most efficient configuration. This has enabled us to optimize our plant configuration and was a critical component in our recent tender wins."

For more information about PowerAlpha and BrightNight's 37 GW renewable power portfolio, please see http://www.brightnightpower.com/poweralpha or contact [emailprotected].

About BrightNight

BrightNight is the first global renewable integrated power company designed to provide utility and commercial and industrial customers with clean, dispatchable renewable power solutions. BrightNight works with customers across the U.S. and Asia Pacific to design, develop, and operate safe, reliable, large-scale renewable power projects optimized through its proprietary software platform PowerAlpha to better manage the intermittent nature of renewable energy. Its deep customer engagement process, team of proven power experts, and industry-leading solutions enable customers to overcome challenging energy sustainability standards, rapidly changing grid dynamics, and the transition away from fossil fuel generation. To learn more, please visit: www.brightnightpower.com

SOURCE BrightNight

See original here:
BrightNight Unveils PowerAlpha Artificial Intelligence Software Platform at the "AI: Powering the New Energy Era ... - PR Newswire

Artificial Intelligence Feedback on Physician Notes Improves Patient Care – NYU Langone Health

Artificial intelligence (AI) feedback improved the quality of physician notes written during patient visits, with better documentation improving the ability of care teams to make diagnoses and plan for patients future needs, a new study finds.

Since 2021, NYU Langone Health has been using pattern-recognizing, machine-learning AI systems to grade the quality of doctors' clinical notes. At the same time, NYU Langone created data informatics dashboards that monitor hundreds of measures of safety and the effectiveness of care. The informatics team over time trained the AI models to track in dashboards how well doctors' notes achieved the 5 Cs: completeness, conciseness, contingency planning, correctness, and clinical assessment.

Now, a new case study, published online April 17 in NEJM Catalyst Innovations in Care Delivery, shows how notes improved by AI, in combination with dashboard innovations and other safety initiatives, resulted in an improvement in care quality across four major medical specialties: internal medicine, pediatrics, general surgery, and the intensive care unit.

This includes improvements across the specialties of up to 45 percent in note-based clinical assessments (that is, determining diagnoses) and reasoning (making predictions when diagnoses are unknown). In addition, contingency planning to address patients future needs saw improvements of up to 34 percent.

Last year, NYU Langone added to this long-standing effort a newer form of AI that develops likely options for the next word in any sentence based on how billions of people used language on the internet over time. A result of this next-word prediction is that generative AI chatbots like GPT-4 can read physician notes and make suggestions. In a pilot within the case study, the research team supercharged their machine-learning AI model, which can only give physicians a grade on their notes, by integrating a chatbot that added an accurate written narrative of issues with any note.
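
The two-stage setup described above, a grading model for the 5 Cs plus a chatbot that writes a narrative critique, can be sketched roughly as follows. The scoring heuristics, thresholds, and the call_llm() helper are hypothetical placeholders for illustration only, not NYU Langone's actual models or workflow.

```python
# Hedged sketch of a two-stage note-feedback loop: (1) a grader scores a note
# on the 5 Cs, (2) a language model turns the weak dimensions into written
# feedback. All heuristics and the call_llm() helper are placeholders.
from dataclasses import dataclass

FIVE_CS = ["completeness", "conciseness", "contingency planning",
           "correctness", "clinical assessment"]

@dataclass
class NoteGrade:
    scores: dict  # dimension -> score between 0.0 and 1.0

def score_note(note_text: str) -> NoteGrade:
    # Stand-in for the trained machine-learning grader: flag very long notes
    # as less concise and notes without a plan section as weak on contingency.
    scores = {c: 1.0 for c in FIVE_CS}
    if len(note_text.split()) > 800:
        scores["conciseness"] = 0.4
    if "plan" not in note_text.lower():
        scores["contingency planning"] = 0.3
    return NoteGrade(scores)

def call_llm(prompt: str) -> str:
    # Placeholder for a generative model such as GPT-4; a real system would
    # call an actual API here and return its narrative critique.
    return f"[narrative feedback for prompt: {prompt}]"

def note_feedback(note_text: str) -> str:
    grade = score_note(note_text)
    weak = [c for c, s in grade.scores.items() if s < 0.7]
    if not weak:
        return "Note meets all 5 C targets."
    return call_llm(f"Explain how to improve this clinical note on: {', '.join(weak)}.")

print(note_feedback("Patient seen for follow-up. Assessment: condition stable."))
# -> narrative feedback about contingency planning (no plan section found)
```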

The NYU Langone case study also showed that GPT-4 or other large language models could provide a method for assessing the 5Cs across medical specialties without specialized training in each. Researchers say that the generalizability of GPT-4 for evaluating note quality supports its potential for application at many health systems.

"Our study provides evidence that AI can improve the quality of medical notes, a critical part of caring for patients," said lead study author Jonah Feldman, MD, medical director of clinical transformation and informatics within NYU Langone's Medical Center Information Technology (MCIT) Department of Health Informatics. "This is the first large-scale study to show how a healthcare organization can use a combination of AI models to give note feedback that significantly improves care quality."

Poor note quality in healthcare has been a growing concern since the enactment of the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009. The act gave incentives to healthcare systems to switch from paper to electronic health records (EHR), enabling improved patient safety and coordination between healthcare providers.

A side effect of EHR adoption, however, has been that physician clinical notes are now four times longer on average in the United States than in other countries. Such note bloat has been shown to make it harder for collaborating clinicians to understand diagnoses described by their colleagues, say the study authors. Issues with note quality have been shown in the field to lead to missed diagnoses and delayed treatments, and there is no universally accepted methodology for measuring it. Further, evaluation of note quality by human peers is time-consuming and hard to scale up to the organizational level, the researchers say.

The effort captured in the new NYU Langone case study outlines a structured approach for organizational development of AI-based note quality measurement, a related system for process improvement, and a demonstration of AI-fostered clinician behavioral change in combination with other safety programs. The study also details how AI-generated note quality measurement helped to foster adoption of standard workflows, a significant driver for quality improvement.

Each of the four medical specialties that participated in the study achieved the institutional goal, which was that more than 75 percent of inpatient history and physical exams and consult notes were being completed using standardized workflows that drove compliance with quality metrics. This represented an improvement from the previous share of less than 5 percent.

"Our study represents the founding stage of what will undoubtedly be a national trend to leverage cutting-edge tools to ensure clinical documentation of the highest quality, measurably and reproducibly," said study author Paul A. Testa, MD, JD, MPH, chief medical information officer for NYU Langone. "The clinical note can be a foundational tool, if accurate, accessible, and effective, to truly influence clinical outcomes by meaningfully engaging patients while ensuring documentation integrity."

Along with Dr. Feldman and Dr. Testa, the current study's authors from NYU Langone were Katherine Hochman, MD, MBA, Benedict Vincent Guzman, Adam J. Goodman, MD, and Joseph M. Weisstuch, MD.

Greg Williams, Phone: 212-404-3500, Gregory.Williams@NYULangone.org

Read more:
Artificial Intelligence Feedback on Physician Notes Improves Patient Care - NYU Langone Health

Small Businesses Face Uphill Battle in AI Race, Says AI Index Head – PYMNTS.com

Small and medium-sized businesses will struggle to keep pace with tech giants like OpenAI in developing their own artificial intelligence (AI) models, according to a new report from Stanford University.

In an interview, Nestor Maslej, the editor-in-chief of Stanford's newly released 2024 AI Index Report, highlighted the study's findings on the growing AI divide between large and small companies. While tech behemoths pour billions into AI R&D, smaller firms lack the resources and talent to compete head-on.

"A small or even medium-sized business will not be able to train a frontier foundation model that can compete with the likes of GPT-4, Gemini or Claude," Maslej said. "However, there are some fairly competent open-source models, such as Llama 2 and Mistral, that are freely accessible. A lot can be done with these kinds of open-source models, and they are likely to continue improving over time. In a few years, there may be an open, relatively low-parameter model that works as well as GPT-4 does today."

A study from PYMNTS last year highlighted that generative AI technologies such as OpenAI's ChatGPT could significantly enhance productivity, yet they also risk disrupting employment patterns.

A major takeaway from the report is the possible disconnect between AI benchmarks and actual business requirements in the real world.

"To me, it is less about improving the models on these tasks and more about asking whether the benchmarks we have are even well-suited to evaluate the business utility of these systems," Maslej stated. The current benchmarks may not be well-aligned with the real-world needs of businesses.

The report indicated that while private investment in AI generally declined last year, funding for generative AI experienced a dramatic surge, growing nearly eightfold from 2022 to $25.2 billion. Leading players in the generative AI industry, including OpenAI, Anthropic, Hugging Face and Inflection, reported substantial increases in their fundraising efforts.

Maslej highlighted that while the costs of adopting AI are considerable, they are overshadowed by the expenses associated with training the systems.

"Adoption is less of a cost problem because the real cost lies in training the systems. Most companies do not need to worry about training their own models and can instead adopt existing models, which are available either freely through open source or through relatively cost-accessible APIs," he explained.
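
As an illustration of the adopt-rather-than-train path Maslej describes, the sketch below loads an open-weight model through the Hugging Face transformers pipeline API. The specific model ID is only an example of an openly available Mistral variant; running a 7B-parameter model also assumes a machine with enough memory, and a production deployment would add quantization, licensing, and safety considerations.

```python
# Sketch of the "adopt an existing open-source model" path rather than
# training from scratch. Assumes the transformers library is installed and
# the example model ID is accessible; swap in any open-weight model
# (e.g., a Llama 2 or Mistral variant) that fits your hardware.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weight model
)

prompt = "Draft a two-sentence product description for a handmade ceramic mug."
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```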

The report also calls for standardized benchmarks in responsible AI development. Maslej imagines a future where common benchmarks allow businesses to easily compare and choose AI models that match their ethical standards. "Standardization would make it simpler for businesses to more confidently ascertain how various AI models compare to one another," he stated.

Balancing profit with ethical concerns emerges as a key challenge. The report shows that while many businesses are concerned about issues like privacy and data governance, fewer are taking concrete steps to mitigate these risks. "The more pressing question is whether businesses are actually taking steps to address some of these concerns," Maslej noted.

Measuring AI's impact on worker productivity across different industries remains complex. "It is possible to measure productivity within various industries; however, comparing productivity gains across industries is more challenging," Maslej said.

Looking ahead, the report highlights the need for businesses to navigate an increasingly complex regulatory landscape. On Tuesday, Utah Sen. Mitt Romney and several Senate colleagues unveiled a plan to guard against the potential dangers of AI, including threats in biological, chemical, cyber and nuclear areas, by increasing federal regulation of advanced technological developments.

Maslej emphasized the importance of staying vigilant. "Navigating this issue will be challenging. The regulatory standards for AI are still unclear."

As public awareness of AI grows, Maslej believes that businesses must address concerns about job displacement and data privacy. "As people become more aware of AI, how can businesses proactively address nervousness, especially regarding job displacement and data privacy?" he posed as a crucial question for the industry to consider.

The 2024 AI Index Report is meant to guide businesses and society in navigating the rapid advancements in artificial intelligence. Maslej concluded, "The AI landscape is evolving at an unprecedented pace, presenting both immense opportunities and daunting challenges."

Go here to see the original:
Small Businesses Face Uphill Battle in AI Race, Says AI Index Head - PYMNTS.com

NSA Warns of AI Cybersecurity Risks, Urges Businesses to Bolster Defenses – PYMNTS.com

The National Security Agency (NSA) is sounding the alarm on the cybersecurity risks posed by artificial intelligence (AI) systems, releasing new guidance to help businesses protect their AI from hackers.

As AI increasingly integrates into business operations, experts warn that these systems are particularly vulnerable to cyberattacks. The NSA's Cybersecurity Information Sheet provides insights into AI's unique security challenges and offers steps companies can take to harden their defenses.

"AI brings unprecedented opportunity but also can present opportunities for malicious activity. NSA is uniquely positioned to provide cybersecurity guidance, AI expertise, and advanced threat analysis," NSA Cybersecurity Director Dave Luber said Monday (April 15) in a news release.

The report suggested that organizations using AI systems should put strong security measures in place to protect sensitive data and prevent misuse. Key measures include conducting ongoing compromise assessments, hardening the IT deployment environment, enforcing strict access controls, using robust logging and monitoring and limiting access to model weights.
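
One of those measures, limiting and verifying access to model weights, can be made concrete with a small integrity check like the sketch below. The file path and expected digest are hypothetical placeholders; the NSA sheet describes the goal, not this specific code.

```python
# Minimal sketch of one hardening measure: verify model weights against a
# known-good hash before loading them, so tampered or swapped files are
# rejected. File path and expected digest are hypothetical placeholders.
import hashlib
from pathlib import Path

WEIGHTS_PATH = Path("models/classifier.bin")   # hypothetical path
EXPECTED_SHA256 = "0123abcd..."                # digest recorded at deployment time

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def load_weights_if_trusted(path: Path, expected: str) -> bytes:
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"weight file hash mismatch: {actual} != {expected}")
    return path.read_bytes()  # only load after the integrity check passes

# Example (requires the hypothetical file to exist):
# weights = load_weights_if_trusted(WEIGHTS_PATH, EXPECTED_SHA256)
```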

"AI is vulnerable to hackers due to its complexity and the vast amounts of data it can process," Jon Clay, vice president of threat intelligence at the cybersecurity company Trend Micro, told PYMNTS. "AI is software, and as such, vulnerabilities are likely to exist which can be exploited by adversaries."

As reported by PYMNTS, AI is revolutionizing how security teams approach cyber threats by accelerating and streamlining their processes. Through its ability to analyze large datasets and identify complex patterns, AI automates the early stages of incident analysis, enabling security experts to start with a clear understanding of the situation and respond more quickly.

Cybercrime continues to rise with the increasing embrace of a connected global economy. According to an FBI report, the U.S. alone saw cyberattack losses exceed $10.3 billion in 2022.

AI systems are particularly prone to attacks due to their dependency on data for training models, according to Clay.

"Since AI and machine learning depend on providing and training data to build their models, compromising that data is an obvious way for bad actors to poison AI/ML systems," Clay said.

He emphasized the risks of these hacks, explaining that they can lead to stolen confidential data, harmful commands being inserted and biased results. These issues could upset users and even lead to legal problems.

Clay also pointed out the challenges in detecting vulnerabilities in AI systems.

"It can be difficult to identify how they process inputs and make decisions, making vulnerabilities harder to detect," he said.

He noted that hackers are looking for ways to get around AI security to change its results, and this method is being talked about more in secret online forums.

When asked about measures businesses can implement to enhance AI security, Clay emphasized the necessity of a proactive approach.

"It's unrealistic to ban AI outright, but organizations need to be able to manage and regulate it," he said.

Clay recommended adopting zero-trust security models and using AI to enhance safety measures. This method means AI can help analyze emotions and tones in communications and check web pages to stop fraud. He also stressed the importance of strict access rules and multi-factor authentication to protect AI systems from unauthorized access.

"As businesses embrace AI for enhanced efficiency and innovation, they also expose themselves to new vulnerabilities," Malcolm Harkins, chief security and trust officer at the cybersecurity firm HiddenLayer, told PYMNTS.

"AI was the most vulnerable technology deployed in production systems because it was vulnerable at multiple levels," Harkins added.

Harkins advised businesses to take proactive measures, such as implementing purpose-built security solutions, regularly assessing AI models' robustness, continuous monitoring and developing comprehensive incident response plans.

"If real-time monitoring and protection were not in place, AI systems would surely be compromised, and the compromise would likely go unnoticed for extended periods, creating the potential for more extensive damage," Harkins said.

See more here:
NSA Warns of AI Cybersecurity Risks, Urges Businesses to Bolster Defenses - PYMNTS.com
