
Toggled Survey: Businesses Grapple with Data Analytics in the … – Datanami

TROY, Mich., May 19, 2023 -- Toggled, a wholly owned subsidiary of Altair focused on intelligent building management solutions, released survey results revealing that while many businesses are embracing smart building technology to increase energy efficiency and lower costs, they fall short of their goals due to an inability to master the data. The independent survey of more than 500 facility decision-makers across several industries gauged smart building technology adoption, its business benefits, and its impact on sustainability initiatives.

"It's clear from our survey that the battle for smart building adoption has been won. But we can't stop there. By helping facilities establish a complete tech-to-intelligence loop, they can fully optimize energy and cost efficiencies," said Daniel Hollenkamp Jr., chief operating officer, Toggled. "Most businesses are only scratching the surface of what the technology can do. Once facility managers start to capitalize on what their performance data is telling them and where to adjust, the true business value of a scalable and flexible IoT-enabled smart building network comes to life."

Adoption Is High and Strides Are Being Made

According to the survey, 78% of respondents have deployed smart building features, and the same share of that group (78%) has seen an increase in energy efficiency and a reduction in costs as a result of using the technology.

While the results demonstrate businesses are making great progress in lowering energy use and costs, nearly two-thirds of facility decision-makers (64%) say they are still looking for ways to monitor and analyze the carbon footprint or greenhouse gas emissions of their facilities. Only 36% have seen measurable results in decarbonization.

Verification and Analysis are Low, Inhibiting Further Progress

Most respondents (94%) who have implemented smart building technology say their organization has invested in data analytics. However, as many as 34% say they lack the talent and skills to integrate data science into their smart building platforms. This underscores the importance of choosing a system that can customize analytics to the facility and provide simple, actionable information in a timely manner. These features enable users to monitor building performance based on real-time data insights, directly from web-based devices, without extensive coding or data science knowledge. In doing so, organizations can gain insight into how these systems are optimizing performance while also lowering their carbon footprint.

More key findings from the survey include:

"So many of the issues underscored in the survey results are things we see every day, and they're why we developed our solution with so much intention around simplicity, customization, and integration," said Hollenkamp. "By giving users a customizable data analytics dashboard that generates real-time overviews of building efficiencies and anomalies, they gain complete control over their smart building environment so they can optimize performance."

The national survey was commissioned by Toggled and conducted by Atomik Research between April 13 and April 19, 2023. The survey drew responses from 505 U.S. decision-makers who hold authority over their organizations' facilities across several target industries, including construction, financial services, information services, manufacturing, healthcare, education, real estate, agriculture, restaurant/food and beverage, transportation, energy, hospitality, and tourism.

About Toggled

Toggled and Toggled iQ are registered trademarks of Ilumisys, Inc. (dba Toggled), a wholly owned subsidiary of Altair (Nasdaq: ALTR). Toggled iQ is a networked lighting and building control system that leverages the Internet of Things, enabling customers to create unique and scalable solutions across many use cases, including lighting control, HVAC, remote sensor monitoring, and intelligent building control.

About Altair

Altair (Nasdaq: ALTR) is a global leader in computational science and artificial intelligence (AI) that provides software and cloud solutions in simulation, high-performance computing (HPC), data analytics, and AI. Altair enables organizations across all industries to compete more effectively and drive smarter decisions in an increasingly connected world, all while creating a greener, more sustainable future.

Source: Altair

Why Investing in Employee Training Think Data Science … – Acceleration Economy

In episode 72 of the Data Modernization Minute, Wayne Sadin explains why employee training is as critical for organizational success as new data tools.

Interested in hearing practitioner and platform insights on how solutions such as ChatGPT will impact the future of work, customer experience, data strategy, and cybersecurity? Then make sure to sign up (registration is free) for Acceleration Economy's Generative AI Digital Summit, which takes place on May 25th.

00:50 Wayne's topic today is investment. Wayne's looked at a lot of tools in the last few months. He recommends that organizations take some of the money they're going to spend on tools and invest in training for their teams.

01:32 A common scenario is: "We're going to buy this cool tool. It solves our problem. Now we've used our budget up." How about making people better at change? How about giving teams new tools for process analytics? For data science, for statistics? Refresh old bachelor's or master's degrees.

02:38 People can be taught how to learn more effectively, how to sell, and how to talk about ideas and get them across to people. These are all great skills.

03:04 What if organizations took some of that IT budget and brought an IT person and their corresponding business person or business people (their customers, if you will) together, and trained them as a team?

03:19 Tools are fun. Tools are effective, and organizations shouldn't stop investing in them. But take some of that money and think about how to upgrade people's skill sets.

Looking for more insights into all things data? Subscribe to the Data Modernization channel:

A Beginner's Guide to Anomaly Detection Techniques in Data Science – KDnuggets

Anomaly detection is an important task that you will eventually encounter if you work with data. It is widely applied in many fields, such as manufacturing, finance, and cybersecurity.

Getting started with this topic on your own can be challenging without a guide that orients you step by step. In my first experience as a data scientist, I remember struggling a lot to master this discipline.

First of all, anomaly detection involves the identification of rare observations whose values deviate drastically from the rest of the data points. These anomalies, often called outliers, are a minority, while most items belong to the normal class. This means we are dealing with an imbalanced dataset.

Another challenge is that, in industry, most of the time there is no labelled data, and it's hard to interpret predictions without any target. This means you can't use the evaluation metrics typically applied to classification models, and you need other methods to interpret and trust your model's output. Let's get started!

"Anomaly detection refers to the problem of finding patterns in data that do not conform to expected behavior. These nonconforming patterns are often referred to as anomalies, outliers, discordant observations, exceptions, aberrations, surprises, peculiarities, or contaminants in different application domains." (Credit: "Anomaly Detection: A Survey")

This is a good definition of anomaly detection in a few words. Anomalies are often associated with errors introduced during data collection, in which case they end up being eliminated. But there are also cases where new items show completely different variability compared to the rest of the data, and appropriate approaches are needed to recognize this type of observation. Identifying these observations can be very useful for decision-making in companies operating in many sectors, such as finance and manufacturing.

There are three main types of anomalies: point anomalies, contextual anomalies and collective anomalies.

As you may deduce, point anomalies are the simplest case: a single observation is anomalous compared to the rest of the data, so it is identified as an outlier. For example, suppose we want to detect credit card fraud in the transactions of a bank's clients. In that case, a single fraudulent transaction by a client can be considered a point anomaly.
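A point anomaly like the fraudulent transaction just described can often be caught with a robust z-score based on the median and the median absolute deviation. A minimal sketch (the 3.5 threshold and the toy amounts are illustrative assumptions, not from the article):

```python
import numpy as np

def point_anomalies(values, threshold=3.5):
    """Flag point anomalies using a robust z-score built from the
    median and the median absolute deviation (MAD)."""
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median))
    # 0.6745 rescales the MAD so the score is comparable to a z-score
    scores = 0.6745 * (values - median) / mad
    return np.abs(scores) > threshold

# Typical card transactions around 50, plus one suspicious charge
amounts = [42, 55, 48, 51, 60, 47, 53, 5000]
print(point_anomalies(amounts))   # flags only the 5000 charge
```

Using the median rather than the mean matters here: a single extreme value would drag the mean (and the standard deviation) toward itself and mask the very anomaly we want to detect.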

Another case is the contextual anomaly, which is anomalous only within a specific context. An example is the summer heat waves in the United States: plotted over time, the data shows a huge spike in 1930, reflecting an extreme event called the Dust Bowl, so named because it was a period of dust storms that damaged the south-central United States.

The third and last type of anomaly is the collective anomaly. The most intuitive example is the absence of precipitation that Italy has experienced for months this year. Compared with data from the last 50 years, there has never been similar behaviour. The single data instances in an anomalous collective may not be outliers by themselves, but together they indicate a collective anomaly. In this context, a single day without precipitation is not anomalous on its own, while many consecutive days without precipitation can be considered anomalous compared to the data of previous years.
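The dry-spell logic just described (many individually normal days forming an anomalous run) can be sketched in a few lines. The toy rainfall series and the 1.5x factor are invented for illustration:

```python
def longest_dry_spell(daily_rain_mm):
    """Return the longest run of consecutive days with no precipitation."""
    longest = current = 0
    for rain in daily_rain_mm:
        current = current + 1 if rain == 0 else 0
        longest = max(longest, current)
    return longest

def collective_anomaly(this_year, history, factor=1.5):
    """Flag this year's dry spell if it clearly exceeds every historical
    year's longest spell: a collective anomaly, not a point anomaly."""
    historical_max = max(longest_dry_spell(year) for year in history)
    return longest_dry_spell(this_year) > factor * historical_max

# Toy data: past years max out at ~4 dry days; this year has 12 in a row
history = [[1, 0, 0, 2, 0, 0, 0, 3], [0, 0, 1, 0, 0, 0, 0, 2]]
this_year = [0] * 12 + [1]
print(collective_anomaly(this_year, history))   # True
```

Note that no single zero in `this_year` is anomalous; only the length of the run, compared against history, triggers the flag.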

There are several approaches that can be applied to anomaly detection, from simple statistical rules to machine learning models.

In an unsupervised setting, there are no labels, so you cannot compute evaluation metrics such as precision (the fraction of flagged observations that are true anomalies) or recall (the fraction of actual anomalies that are flagged).
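In this unsupervised setting, one widely used model is Isolation Forest. A minimal sketch with scikit-learn on synthetic data; the data and the 5% contamination guess are illustrative assumptions, since without labels the anomaly rate can only be estimated:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# 200 normal points around the origin, plus 5 injected far-away anomalies
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
anomalies = rng.uniform(low=8.0, high=10.0, size=(5, 2))
X = np.vstack([normal, anomalies])

# contamination is our *guess* at the anomaly rate: with no labels there
# is no metric to verify it, which is exactly the problem described above
model = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = model.predict(X)        # +1 = inlier, -1 = outlier
print((labels == -1).sum(), "points flagged as anomalous")
```

Because the data here is synthetic, we happen to know the five injected points are the real anomalies; in practice you would have to inspect and justify the flagged points instead.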

Without any possibility of evaluating the performance of the model, it's more important than ever to provide an explanation of model predictions. This can be achieved by using interpretability approaches, like SHAP and LIME.

There are two possible levels of interpretation: global and local. Global interpretability aims to explain the model as a whole, while local interpretability aims to explain the model's prediction for a single instance.
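SHAP and LIME are full libraries, but the underlying idea of a local explanation (attributing one instance's prediction to individual features) can be sketched with a simple perturbation test. This only illustrates the concept; it is not how either library is actually implemented:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
X[:, 2] *= 0.1                         # feature 2 is nearly constant
model = IsolationForest(random_state=0).fit(X)

def local_explanation(model, X, x):
    """Estimate each feature's contribution to one instance's anomaly
    score by replacing it with a typical (median) value and measuring
    how much the score recovers. Larger = more responsible."""
    base = model.score_samples([x])[0]          # lower = more anomalous
    contributions = []
    for j in range(len(x)):
        x_ref = x.copy()
        x_ref[j] = np.median(X[:, j])           # neutralize feature j
        contributions.append(model.score_samples([x_ref])[0] - base)
    return np.array(contributions)

outlier = np.array([0.1, 0.0, 5.0])             # anomalous only in feature 2
print(local_explanation(model, X, outlier))
```

The largest contribution points at feature 2, which is exactly where this instance deviates from the training data: a local explanation for one prediction, as opposed to a global summary of the model.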

I hope you found this quick overview of anomaly detection techniques useful. As you have noticed, it's a challenging problem to solve, and the most suitable technique changes depending on the context. I should also highlight that it's important to do some exploratory analysis before applying any anomaly detection model, for example using PCA to visualize the data in a lower-dimensional space, or using boxplots. If you want to go deeper, check the resources below. Thanks for reading! Have a nice day!
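The exploratory step mentioned above (PCA for a lower-dimensional view, boxplots for univariate outliers) can be combined in a short sketch. The synthetic data and the standard 1.5 x IQR boxplot rule are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
X[:3] += 15                            # three gross outliers

# Project to 2 components to eyeball (or plot) the data
X2 = PCA(n_components=2).fit_transform(X)

def iqr_outliers(values, k=1.5):
    """Boxplot rule: flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)

flagged = iqr_outliers(X2[:, 0]) | iqr_outliers(X2[:, 1])
print(flagged[:3])                     # the three shifted rows stand out
```

In a notebook you would typically scatter-plot `X2` and draw boxplots of each component; here the IQR rule plays the role of the boxplot's whiskers.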

Eugenia Anello is currently a research fellow at the Department of Information Engineering of the University of Padova, Italy. Her research project is focused on Continual Learning combined with Anomaly Detection.

5 Founders on the Future of Data – Andreessen Horowitz

It's well understood that artificial intelligence is advancing like mad right now. What's less appreciated is the role that data and infrastructure continue to play in these advances, whether that's adding new data sources to train better models, building the data infrastructure to support AI workloads, or taking advantage of more powerful hardware to do all sorts of new things. And, of course, lost in all the excitement around AI is the fact that good, old-fashioned data analysis is still a major enterprise workload and continues to see its own fair share of innovation.

We recently held our Data and AI Forum in New York City, featuring talks from a collection of our founders and other leaders in the space about where the world of data is heading. Here are some highlights (edited for readability) from the founders building products across the spectrum of use cases.

Our most fulfilled, amazing days as humans are the days that we are spending doing creative and interesting work and not doing the tedious drudgery stuff. And I think AI is here to help us achieve that state of fulfillment.

I've been working in data, data science, and data analytics my whole career. I am now the founder and CEO of a company that builds a data science and analytics tool, and our product is used by thousands of data practitioners every day. And we see them do some really creative, interesting stuff.

I think data practitioners are creatives. I know it's not the first thing that comes to mind when I say creatives (I think of artists or whatever), but think about what data scientists do in their day. They're asking questions, they're forming hypotheses, they're testing new things, they're building narratives, they're taking risks, they're telling stories. This is good data science, it's good data analytics. And it's what we expect from our data teams. It's an art and a science and a great use of human time.

But data work can also be really tedious. You spend a lot of time writing boilerplate and fixing dependencies and tracking down missing parentheses in a query. It can be more plumbing than science sometimes. This is where I think people wind up spending a lot of their time, and it really is a blocker to them doing their best work. So this really feels like a perfect opportunity to bring human-computer symbiosis into this creative profession.

Now, most people, when they think of this, assume it means just replacing data teams with a magic insights text box. Like, the next step is we'll all buy solutions into which our stakeholders or executives will come in, they'll write a question, and it'll give them a magic response back. You know: properly formatted charts and well-reasoned explanations and full business context. But that doesn't really work.

And it doesn't work, one, because these models aren't perfect. They can hallucinate, they're missing a lot of context, they don't understand the full situation of things. But also because humans want to be able to hear a story, and understand, and ask and answer questions of a human around these things.

Even though AI has this power to enable us to get more value out of our content, it's really challenging to do that. There's no such thing as a free lunch. And I think there are four main challenges that prevent organizations and businesses from really being able to unlock this data right now.

The first one is scale. When we think about unstructured text and visual data, it's orders of magnitude larger than today's big data. To put that into perspective: if we had 10 million rows of tabular data, that's around 40 megabytes. We can think of that as being like [the surface] area of Lake Tahoe in California, which is around 496 square kilometers.

If we were to think about 10 million text documents, we go from 40 megabytes to 40 gigabytes. Now we have something more on the scale of the Caspian Sea: 371,000 square kilometers. That's three orders of magnitude more data, in terms of volume, than tabular data.

And then when we think about visual data, if we had 10 million images, that would be 20 terabytes of data. That's another three orders of magnitude bigger. That's like the Pacific Ocean in terms of the sheer scale of data volume. . . .
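The jumps in the comparison above can be checked with back-of-the-envelope arithmetic. The per-item sizes are the assumptions implied by the speaker's round numbers (~4 bytes per tabular row, ~4 KB per text document, ~2 MB per image), not measurements:

```python
# Implied per-item sizes behind the 40 MB / 40 GB / 20 TB comparison
N = 10_000_000

tabular = N * 4                # bytes -> ~40 MB  ("Lake Tahoe")
text    = N * 4 * 1024         # bytes -> ~40 GB  ("Caspian Sea")
images  = N * 2 * 1024**2      # bytes -> ~20 TB  ("Pacific Ocean")

# Each step really is roughly three orders of magnitude
print(text / tabular, images / text)   # 1024.0 512.0
```

So the same 10 million items grow by a factor of roughly a thousand at each step simply because each item carries more bytes, which is the scale argument being made here.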

Right now, when we think about big data or data lakes, we have tools and vehicles that can process that efficiently. But that's kind of like having a rowboat or a canoe: it'll get you across a lake, but I wouldn't trust it if you're trying to cross the Pacific Ocean.

In order to actually unlock the value from this richer, more contextual data that we get with content, we need to create tools and infrastructure to process it. It's probably going to be a similar shape, in the way a seagoing boat looks somewhat similar to a rowboat, but the scale and the processing will have to be completely different. We'll need to prepare ourselves for the sheer volume and scale we're thinking about when we move from a tabular view of the world to more of a content view of the world.

I think one really interesting thing that's happening, and is changing the way systems need to be architected, is that what is considered big data is actually increasing. When Google came out with the MapReduce paper in 2004, there were a lot of workloads that you had to spread across multiple machines because machines were pretty small. Like, the first AWS instances had a gigabyte of RAM and one CPU.

Now, [you can rent AWS instances with hundreds of processors and terabytes of RAM]. There are very few workloads that wont fit into that amount of hardware. . . .

I think there's a bunch of things that have to be true in order for you to really need big data systems: you've got lots of data; you need it all; you need it all at once; the amount you're using doesn't fit on a machine; you can't get rid of that data and you can't summarize it. OK, then you need some fancy scale-out system.

So what does the world look like if data size isn't the primary driver of your architecture? What are some things you can do about it? One is: don't be afraid to scale up. Scale-up became a dirty word, I guess, once Google published the MapReduce paper. Everybody's building these large-scale distributed systems but, actually, scale-up works really well if you clean up after your data. Just good data hygiene can get you pretty far.

Another interesting one: if you have smaller data, you can push some of that out to the user. When we built BigQuery, one of the things we said was that, with large data, you want to move the compute to the data rather than the data to the compute. Laptops used to be synonymous with underpowered but, these days, M2 MacBooks are basically supercomputers. If you have smaller data sets, why not push the workloads out to them? . . . It's a lot less expensive to do locally than it is to do in the cloud.

There's this Cambrian explosion of new data sources and new applications every single year . . . And what that creates, of course, is data silos. You now have your most valuable data spread across a variety of database systems, and that creates a lot of vendor lock-in, because many of these systems are proprietary in nature, which means you can only access that data through that particular system.

So, this notion of centralizing your data: that model is much slower than it looks, because you have to move all of the data out of all these different systems and get it into one place before you can do analysis. It limits your view to what is in that enterprise data warehouse, which is never the complete truth about your business. You always have data in other places. And take it from me, having spent time at Teradata: not one of their customers had all of their data in Teradata; it's just not possible.

And, of course, there's proprietary lock-in, which can become very expensive. That was really the challenge for many of these early databases: Oracle, Teradata, IBM DB2. They're not bad databases by any stretch of the imagination. Even today, I would argue Teradata is a better database than Snowflake. But the market is moving away from them, and that's because it's incredibly expensive and customers feel locked in.

So, [the idea that you need to centralize] your data: not true, and also impossible. The truth is you need to optimize for decentralized data.

Most of the AI systems being trained today are trained on public datasets, mostly data crawled from the web. And I think there's actually still a decent amount of public data available. Even if we're reaching the limits, say, of text, there are other modalities that folks are starting to explore: audio, video, images. I think there are a lot of really rich data sources still out there on the web.

There are also (I don't know the exact magnitudes, but I imagine roughly a similar scale of) private datasets out there. And I think that's going to be really important in certain applications. Imagine you have a code-generation system: it's great that it's trained on all of public GitHub, but it might be even more useful if it's trained on my own private code base. I think figuring out how to blend these public and private datasets is going to be really interesting. And I think it's going to open up a whole bunch of new applications, too.

From Character's perspective, and I guess more generally, one of the things we're starting to see that is pretty exciting is this move away from what you could call static datasets, data that already exists out there, independent of AI systems. We're moving now, I think, toward datasets that are built with AI in the loop. And so you have what people often refer to as data flywheels. You can imagine, say, for Character: we have all these rich interactions where a character is having a conversation with someone, and we get feedback on that conversation from the user, either explicitly or implicitly, and that's really the perfect data to use to make that AI system better.

And so we have these loops that I think are going to be really exciting and provide both richer and, perhaps, much larger data sources for the sort of next generation of systems.

* * *

The views expressed here are those of the individual AH Capital Management, L.L.C. (a16z) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. In addition, this content may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein.

This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investments/.

Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.

Using Data Science to Get New Treatments to Patients Faster – Johnson & Johnson

Sid Jain, a leader on the pharmaceutical R&D data science team at Johnson & Johnson

Twenty years ago, I was living in Boston and wrapping up my college studies in computer science when I was struck by a constellation of uncomfortable and alarming symptoms. I knew that frequent bouts of abdominal pain, diarrhea and bleeding weren't normal, so I sought help and found out I had Crohn's disease, a type of inflammatory bowel disease (IBD) that causes chronic inflammation of the digestive tract.

In a way, I was lucky because I was diagnosed relatively quickly and had some of the best care available; some people with IBD wait years before they get the right help. But that's not to say my path to healing was an easy one. For a while I had to take steroids, powerful anti-inflammatory drugs that control IBD flares but come with potential side effects. While I was taking them my skin broke out, I struggled with insomnia and I had frequent headaches.

Then I started a biologic drug, which can work in part by suppressing the body's hyperactive immune response. That makes sense when you have an autoimmune disease like Crohn's, but it can increase your chances of having side effects, including infections. In fact, I ended up getting an infection, and only after several weeks of sick leave and almost nine months on antibiotics was I able to fully clear it.

Then, for about five years starting in 2009, my Crohn's disease left me in constant abdominal pain and I had to stay in close proximity to a bathroom at all times. I was also losing a lot of blood, which caused my hemoglobin levels to drop dangerously low. I remember being so lethargic I'd drag through the workday and go for iron infusions about once a month. My bosses at the time were supportive, but my health was clearly taking a toll on my career. I even thought about taking time off and going on long-term disability. It took a toll on my family as well, and I couldn't have made it through that phase without the love and support of my family and friends, especially my wife.

What ultimately helped me the most was a series of surgeries. Between 2009 and 2014, I had three bowel resections to remove damaged portions of my colon. During the last procedure, nearly my entire large intestine was removed. These surgeries were painful and were followed by wound infections, putting me out of work for six to eight weeks at a time.

I've been fortunate to lead a relatively normal life since then. Yet even now I require medication to stay in remission. It's helped a lot, but it's not perfect. I still deal with urgency, which can make daily life, as well as travel (which I love), pretty difficult. Perhaps most importantly, I remain immunosuppressed, so living through the COVID-19 pandemic has been especially scary.

Medical illustration of Crohn's disease

IBD patients like me deserve better, as do those with other chronic diseases who are struggling to manage their condition. Whenever I hear about a novel drug in development or the possibility of using an existing drug to treat another disease, I get excited, because it ultimately means that patients may have more choices.

There are some great IBD treatments on the market but, as with any treatment, they don't work for everyone. While we keep moving toward the goal of finding a cure, there needs to be continued research on treatments that address symptoms and get more people into remission, with fewer side effects and less invasive therapies.

Given my health history, healthcare has always been a passion of mine, and it has been my career of choice for 20 years. About three years ago I joined the Janssen Pharmaceutical Companies of Johnson & Johnson as Head of Data Science for Global Development within our R&D Data Science & Digital Health team. While my team and I are not actually creating new drugs in a lab or testing them on patients, we play a crucial role in helping to accelerate clinical trials so that patients can get the help they need faster.

One of the major focus areas in clinical trials is patient recruitment. In general, only 3 to 4% of patients who are eligible for a given clinical trial actually participate in one. And, importantly, that percentage does not always reflect the diverse patient communities impacted by the disease. Making sure trial participants mirror the population with the disease helps ensure we create medicines that work for everyone.

I'm part of a team that is leveraging the power of data science to help drive patient recruitment and site selection efforts for clinical trials, including in the IBD space.

Using anonymized medical records and the latest methodologies, including artificial intelligence and machine learning, we are able to pinpoint sites with patients who may be able to participate in clinical trials. We can then analyze historical site and investigator performance data to determine which sites would be best positioned to participate in our trials. These recommendations are then used by our clinical trials operations colleagues to inform site engagement and, ultimately, patient recruitment.
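The article describes this pipeline only at a high level. As a purely hypothetical sketch of the "rank sites by historical performance" step, every field name, number, and scoring rule below is invented for illustration; the article does not describe Janssen's actual model or data schema:

```python
# Hypothetical site-ranking sketch: all fields and weights are invented.
sites = [
    {"site": "A", "eligible_patients": 120, "past_enrollment_rate": 0.30},
    {"site": "B", "eligible_patients": 45,  "past_enrollment_rate": 0.60},
    {"site": "C", "eligible_patients": 200, "past_enrollment_rate": 0.10},
]

def expected_enrollment(site):
    """Naive score: patients identified from records times the site's
    historical enrollment rate."""
    return site["eligible_patients"] * site["past_enrollment_rate"]

ranked = sorted(sites, key=expected_enrollment, reverse=True)
print([s["site"] for s in ranked])   # ['A', 'B', 'C']
```

Even this toy version shows why the two data sources matter together: site C has the most eligible patients but the weakest track record, so it ranks last.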

The end goal? Bringing treatments to patients faster. And this new approach is already paying off, helping to expedite recruitment and clinical trial timelines.

I appreciate the opportunity to work toward making life better for all patients, and I feel immense gratitude whenever my team and our Janssen colleagues are proactively focused on a project related to IBD. Identifying new trial sites and enrolling participants faster isn't merely convenient; it's propelling research forward more quickly than before and, in doing so, bringing new hope to patients like me.

How Blockchain Technology is Transforming the Business … – Data Science Central

Blockchain is a revolutionary technology that promises businesses reduced risk along with better data transparency, privacy, and security. It offers several opportunities that businesses can use to improve their processes. Because data privacy is highly important and a top concern for any business, many are trying to use blockchain in their operations.

Several companies and enterprises are adopting blockchain to overhaul existing business operations and data handling. Industries such as healthcare, telecom, supply chain, and IT are integrating blockchain to streamline business processes. This article explores the impact of blockchain on business.

Blockchain is the 5th ranked technology in the world. It has brought changes to finance, healthcare, banking, accounting, and other sectors, for organizations of any size, nature, or geographical location. A blockchain is a growing list of records linked using cryptography. Each record is known as a block, and each block is connected to the previous one. Transaction data between two parties is time-stamped and secure, since the structure resists modification of the data. Blockchain is also secure because it is managed by a peer-to-peer network, and its records are effectively unalterable.
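The block-and-previous-hash structure described here can be illustrated with a toy hash chain in a few lines of Python. This is a sketch of the data structure only; it has none of the peer-to-peer networking or consensus that makes a real blockchain trustworthy:

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """A block stores its payload, a timestamp, and the hash of the
    previous block; altering any earlier block breaks every later link."""
    block = {"data": data, "timestamp": time.time(), "prev_hash": prev_hash}
    payload = {k: block[k] for k in ("data", "timestamp", "prev_hash")}
    block["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return block

def is_valid(chain):
    """Verify every block still points at its predecessor's hash."""
    return all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
               for i in range(1, len(chain)))

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))
print(is_valid(chain))            # True

chain[1]["hash"] = "tampered"     # rewrite history...
print(is_valid(chain))            # ...and the link to the next block breaks: False
```

This is the sense in which the records resist modification: tampering with one block is trivial, but it invalidates the hash reference held by every block after it.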

Blockchain will improve industry standards and the structure of global business through digital currencies. According to global experts, the lack of international standards is a big obstacle to the worldwide adoption of blockchain technologies. Industries are reinforcing these necessary standards and transforming small, mid-size, and large businesses. We have researched and evaluated the following sectors, which are significantly implementing this new technology to improve and optimize their operations.

Blockchain has made major changes in the financial industry. Financial companies can make faster transactions because blockchain has changed the way banking services work. It provides a way for unknown and untrusted parties to agree on the state of a transaction database without using any middleman. Transactions are now faster and more secure for consumers and businesses. Major payment networks, stock exchanges, and money transfer services are using blockchain to reduce transaction fees and make payment transactions faster and more secure.

Accounting is a challenging task for any business. Several accounting firms are using blockchain to manage complex tax codes, invoices, bills, and income tax filings, which demand accuracy and precision. Accounting professionals use these new tools to audit and validate transactions and save time. Rubix is one well-regarded blockchain platform for accounting.

Blockchain can also be used in HR and resource management. Hiring professionals can use blockchain to quickly verify credentials provided by candidates and employees. It can also help flag inaccurate data and reduce reliance on third-party verification companies. Multinationals and large organizations with thousands of employees working across a country need a robust payroll system. Blockchain-based systems and tools can simplify payroll and payment transactions in different currencies. WorkChain and Aworker are automated payroll systems based on blockchain that allow employees to receive their pay as soon as they complete their assigned work. These platforms are multichain-validated data processing platforms that operate on peer-to-peer networks.

Asset management is a critical task for any logistics company. What is the status of the material in inventory, when did it arrive, and where will it move next? Asset data needs to be accurate and must not be alterable. Giant shipping companies are working on blockchain systems to track the shipping containers moving around the world. Blockchain systems currently being developed by IBM and Digital Asset would host a chain of data rather than relying on traditional asset management tools.

Companies that store, manage, and ship valuable assets are adopting blockchain security standards and record-keeping. With increased business volume, traditional software is not enough to handle records that undergo multiple edits and updates. Blockchain will transform this industry.

Top management and operations are the most crucial departments of any organization. Blockchain is helping streamline internal operations and reduce friction in sharing business-critical information. It allows management to maintain a private, shared ledger with historical versions, preserving the authenticity and transparency of information.

In this way, businesses establish a level of trust that helps them achieve their goals and targets. Blockchain provides a secure and safe environment for management and operations to share confidential information with other departments in the company and with offices in other countries.

Blockchain is building the next generation of contract management. Digital contracts are the bridge between two organizations, and blockchain provides new infrastructure for streamlined business. Smart contracts are shared blockchain databases. For example, Accenture has developed a unified solution for businesses to sign blockchain-enabled smart contracts. A contract is secure and can be revised and changed, with every change captured in a ledger. Each change generates a notification that is shared transparently. Blockchain gives all participating parties a shared ledger of all activity on the contract. The final contract can be stored in one place, with all recorded versions and activities preserved to maintain transparency.

Contracts matter most when two business parties agree on common conditions and rules to follow. Smart contracting solutions based on blockchain technology are improving how businesses process contracts. Contracts involve terms, supporting documents, proofs, and multiple revisions shared between the parties. Blockchain technology offers several advantages to businesses and organizations signing complex contracts.
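The revision-tracking behavior described above can be sketched in plain Python (hypothetical class and party names; real smart-contract platforms such as those built on Ethereum or Hyperledger work quite differently under the hood): revisions are appended rather than overwritten, every change notifies all parties, and the full version history remains available.

```python
from dataclasses import dataclass, field

@dataclass
class SharedContract:
    """Append-only contract ledger: revisions are recorded, never overwritten."""
    parties: list
    revisions: list = field(default_factory=list)
    notifications: list = field(default_factory=list)

    def revise(self, author: str, terms: str) -> None:
        """Record a new version and notify every participating party."""
        version = len(self.revisions) + 1
        self.revisions.append({"version": version, "author": author, "terms": terms})
        for p in self.parties:
            self.notifications.append(f"notify {p}: revision v{version} by {author}")

    def current(self) -> dict:
        """The latest revision; earlier versions stay in the ledger."""
        return self.revisions[-1]

contract = SharedContract(parties=["Acme Corp", "Beta LLC"])
contract.revise("Acme Corp", "Net-30 payment terms")
contract.revise("Beta LLC", "Net-45 payment terms")
print(contract.current()["version"])   # 2: the latest agreed revision
print(len(contract.revisions))         # 2: the full history is retained
```

On an actual blockchain, the shared ledger and notifications would be enforced by the network rather than by a single object in memory, but the contract-as-version-history idea is the same.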

Blockchain technology has great potential to transform the foundation and structure of global industries. As international industries adopt this new technology, the revolution will change how data handling, security, and records management are done. Businesses must follow innovation. Blockchain and digital currencies will make business processes faster, more secure, and more efficient, helping to build a strong economy.

Piyush Jain is the founder and CEO of Simpalm, a mobile app development company in the USA. Piyush founded Simpalm in 2009 and has grown it to be a leading mobile and web development company in the DMV area. With a Ph.D. from Johns Hopkins and a strong background in technology and entrepreneurship, he understands how to solve problems using technology. Under his leadership, Simpalm has delivered 350+ mobile apps and web solutions to clients in startups, enterprises, and the federal sector.

View post:

How Blockchain Technology is Transforming the Business ... - Data Science Central


Data at Heart Rhythm 2023 Highlight Key Boston Scientific Therapies – PR Newswire

Additional real-world outcomes further demonstrate safety, efficacy and procedural reproducibility of the FARAPULSE Pulsed Field Ablation System*

Results from global trial of the POLARx Cryoablation System* meet safety and effectiveness endpoints

MARLBOROUGH, Mass., May 20, 2023 /PRNewswire/ -- Boston Scientific Corporation (NYSE: BSX) today announced data supporting use of the company's key electrophysiology and cardiac rhythm management therapies, and the WATCHMAN FLX Left Atrial Appendage Closure (LAAC) Device. All data were presented at Heart Rhythm 2023, the annual meeting of the Heart Rhythm Society, held in New Orleans from May 19-21.

Real-world outcomes from the EU-PORIA registry of the FARAPULSE Pulsed Field Ablation (PFA) System

Real-world outcomes from the multi-center EU-PORIA registry were highlighted in a late-breaking data presentation, further demonstrating the safety, efficacy and learning curve characteristics of the FARAPULSE PFA System. The registry data included favorable single procedure success rates, along with efficient procedure times in a broad patient population. More than 1,200 patients with paroxysmal or persistent atrial fibrillation (AF)** were enrolled and treated at seven high-volume European centers.

Key findings from the registry:

Primary results of the FROZEN-AF IDE trial with the POLARx Cryoablation System

Results from the global, prospective, non-randomized, single-arm FROZEN-AF IDE study of the POLARx Cryoablation System met the safety and effectiveness endpoints of the trial. The study, which examined use of the device for the treatment of patients with paroxysmal (intermittent) atrial fibrillation (AF), included an extension arm for the POLARx FIT Cryoballoon Catheter, a single device capable of being deployed at both 28 mm and 31 mm sizes. The extension arm sub-study also achieved its safety and effectiveness endpoints; it included 50 patients who were treated with at least one application of the 31 mm cryoballoon and will be followed for 12 months. At the time of data release, patients had completed six of a total of 12 months of follow-up.

Key findings from the trial:

Effects of the EMBLEM MRI Subcutaneous Implantable Defibrillator (S-ICD) on tricuspid regurgitation

Data from a secondary analysis of the investigator-sponsored, randomized ATLAS trial compared, among nearly 450 patients, the severity of tricuspid regurgitation at six months following implantation of a transvenous implantable cardioverter-defibrillator (TV-ICD) versus the EMBLEM MRI S-ICD. Tricuspid regurgitation is a disease that occurs when the tricuspid valve does not close properly and is a risk factor for heart failure.

Key findings from the analysis:

Hybrid strategy for secondary prevention of sudden cardiac death using ventricular tachycardia (VT) ablation and the EMBLEM MRI S-ICD

The prospective, investigator-sponsored VTabl-SICD trial explored, among 32 patients, the safety and efficacy of a novel hybrid management strategy combining VT ablation with S-ICD implantation in patients who have scar-related VT. Findings from the study suggested that the combination strategy was superior to conventional TV-ICD implantation for the secondary prevention of sudden cardiac death, significantly reducing the need to deliver ICD therapy and avoiding untreated, symptomatic arrhythmias.

Notable developments for the WATCHMAN FLX LAAC Device

Data presented from two new sub-analyses of the SURPASS study, out of the National Cardiovascular Data Registry (NCDR) LAAO Registry, provided insights into real-world treatment strategies with the WATCHMAN FLX LAAC Device. The first analysis assessed outcomes with different post-procedural antithrombotic therapies and demonstrated that patients treated with direct oral anticoagulants (DOAC) alone had the lowest risk of major adverse events in comparison to other drug regimens following the implant. The second analysis demonstrated that concomitant catheter ablation and LAAC with the WATCHMAN FLX device was safe and had similar outcomes when compared to device implantation alone.

In addition, the latest preclinical data for the investigational WATCHMAN FLX Pro LAAC Device demonstrated that its new thromboresistant coating may further reduce the risk of device-related thrombus and result in faster and more uniform tissue coverage on the device at 45 days post implant. The findings were also published in JACC Clinical Electrophysiology.

"The data shared at this year's Heart Rhythm meeting showcases the breadth and depth of our cardiology therapies, which spans from diagnosis to treatment of cardiac disease, and highlights the continued growth of our portfolio," said Kenneth Stein, M.D., senior vice president and global chief medical officer, Boston Scientific. "From preclinical data to real-world surveillance, data demonstrated positive outcomes for our FARAPULSE PFA System, the POLARx Cryoablation System, the EMBLEM S-ICD System as well as our WATCHMAN FLX LAAC device, and is evidence of our commitment to providing physicians with innovative technologies that make a meaningful impact on the lives of patients living with heart disease."

About Boston Scientific

Boston Scientific transforms lives through innovative medical solutions that improve the health of patients around the world. As a global medical technology leader for more than 40 years, we advance science for life by providing a broad range of high-performance solutions that address unmet patient needs and reduce the cost of healthcare. For more information, visit http://www.bostonscientific.com and connect on Twitter and Facebook.

Cautionary Statement Regarding Forward-Looking Statements

This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934. Forward-looking statements may be identified by words like "anticipate," "expect," "project," "believe," "plan," "estimate," "intend" and similar words. These forward-looking statements are based on our beliefs, assumptions and estimates using information available to us at the time and are not intended to be guarantees of future events or performance. These forward-looking statements include, among other things, statements regarding clinical trials; our business plans and product performance and impact and new and anticipated product approvals and launches. If our underlying assumptions turn out to be incorrect, or if certain risks or uncertainties materialize, actual results could vary materially from the expectations and projections expressed or implied by our forward-looking statements. These factors, in some cases, have affected and in the future (together with other factors) could affect our ability to implement our business strategy and may cause actual results to differ materially from those contemplated by the statements expressed in this press release. As a result, readers are cautioned not to place undue reliance on any of our forward-looking statements.

Factors that may cause such differences include, among other things: future economic, competitive, reimbursement and regulatory conditions; new product introductions; demographic trends; intellectual property; litigation; financial market conditions; manufacturing, distribution and supply chain disruptions and cost increases; and future business decisions made by us and our competitors. All of these factors are difficult or impossible to predict accurately and many of them are beyond our control. For a further list and description of these and other important risks and uncertainties that may affect our future operations, see Part I, Item 1A Risk Factors in our most recent Annual Report on Form 10-K filed with the Securities and Exchange Commission, which we may update in Part II, Item 1A Risk Factors in Quarterly Reports on Form 10-Q we have filed or will file hereafter. We disclaim any intention or obligation to publicly update or revise any forward-looking statements to reflect any change in our expectations or in events, conditions or circumstances on which those expectations may be based, or that may affect the likelihood that actual results will differ from those contained in the forward-looking statements. This cautionary statement is applicable to all forward-looking statements contained in this document.

CONTACTS:
Steve Bailey
Media Relations
(651) 582-4343 (office)
[emailprotected]

Lauren Tengler
Investor Relations
(508) 683-4479
[emailprotected]

*Caution: Investigational Device. Limited by Federal (or US) law to investigational use only. Not available for sale.

**Use of FARAPULSE in persistent AF patients is outside labeled indications.

SOURCE Boston Scientific Corporation

Read the original here:

Data at Heart Rhythm 2023 Highlight Key Boston Scientific Therapies - PR Newswire


Why AI’s diversity crisis matters, and how to tackle it – Nature.com

Inclusivity groups focus on promoting diverse builders for future artificial-intelligence projects.Credit: Shutterstock

Artificial intelligence (AI) is facing a diversity crisis. If it isn't addressed promptly, flaws in the working culture of AI will perpetuate biases that ooze into the resulting technologies, which will exclude and harm entire groups of people. On top of that, the resulting intelligence will be flawed, lacking varied social-emotional and cultural knowledge.

In a 2019 report from New York University's AI Now Institute, researchers noted that more than 80% of AI professors were men. Furthermore, Black individuals made up just 2.5% of Google employees and 4% of those working at Facebook and Microsoft. In addition, the report authors noted that the overwhelming focus on "women in tech" when discussing diversity issues in AI is "too narrow and likely to privilege white women over others."

Some researchers are fighting for change, but there's also a culture of resistance to their efforts. "Beneath this veneer of 'oh, AI is the future, and we have all these sparkly, nice things', both AI academia and AI industry are fundamentally conservative," says Sabine Weber, a scientific consultant at VDI/VDE Innovation + Technik, a technology consultancy headquartered in Berlin. AI in both sectors is dominated by mostly middle-aged white men from affluent backgrounds. "They are really attached to the status quo," says Weber, who is a core organizer of the advocacy group Queer in AI. Nature spoke to five researchers who are spearheading efforts to change the status quo and make the AI ecosystem more equitable.

Senior data science manager at Shopify in Atlanta, Georgia, and a general chair of the 2023 Deep Learning Indaba conference.

I am originally from Ghana and did my master's in statistics at the University of Akron in Ohio in 2011. My background is in using machine learning to solve business problems in customer-experience management. I apply my analytics skills to build models that drive customer behaviour, such as customer-targeting recommendation systems, aspects of lead scoring (ranking potential customers and prioritizing which ones to contact for different communications), and things of that nature.

This year, Im also a general chair for the Deep Learning Indaba, a meeting of the African machine-learning and AI community that is held in a different African country every year. Last year, it was held in Tunisia. This year, it is taking place in Ghana in September.

Our organization is built for all of Africa. Last year, 52 countries participated. The goal is to have all 54 African countries represented. Deep Learning Indaba empowers each country to have a network of people driving things locally. We have the flagship event, which is the annual conference, and country-specific IndabaX events (think TED and TEDx talks).

During Ghana's IndabaX conferences, we train people in how to program and how to deal with different kinds of data. We also do workshops on what is happening in the industry outside of Ghana and how Ghana should be involved. IndabaX provides funding and recommends speakers who are established researchers working for companies such as DeepMind, Microsoft and Google.

To strengthen machine learning and AI and inclusion in Ghana, we need to build capacity by training young researchers and students to understand the skill sets and preparation they need to excel in this field. The number one challenge we face is resources. Our economic status is such that the focus of the government and most Ghanaians is on people's daily bread. Most Ghanaians are not even thinking about technological transformation. Many local academics don't have the expertise to teach the students, to really ground them in AI and machine learning.

Most of the algorithms and systems we use today were created by people outside Africa. Africa's perspective is missing and, consequently, biases affect Africa. When we are doing image-related AI, there aren't many African images available. African data points make up no more than 1% of most industry machine-learning data sets.

When it comes to self-driving cars, the US road network is nice and clean, but in Africa, the network is very bumpy, with a lot of holes. There's no way that a self-driving car trained on US or UK roads could actually work in Africa. We also expect that using AI to help diagnose diseases will transform people's lives. But this will not help Africa if people are not going there to collect data, and to understand African health care and related social-support systems, sicknesses and the environment people live in.

Today, African students in AI and machine learning must look for scholarships and leave their countries to study. I want to see this change and I hope to see Africans involved in decision-making, pioneering huge breakthroughs in machine learning and AI research.

Researchers outside Africa can support African AI by mentoring and collaborating with existing African efforts. For example, we have Ghana NLP, an initiative focused on building algorithms to translate English into more than three dozen Ghanaian languages. Global researchers volunteering to contribute their skill set to African-specific research will help with efforts like this. Deep Learning Indaba has a portal in which researchers can sign up to be mentors.

Maria Skoularidou has worked to improve accessibility at a major artificial-intelligence conference. Credit: Maria Skoularidou

PhD candidate in biostatistics at the University of Cambridge, UK, and founder and chair of {Dis}Ability in AI.

I founded {Dis}Ability in AI in 2018, because I realized that disabled people weren't represented at conferences and it didn't feel right. I wanted to start such a movement so that conferences could be inclusive and accessible, and disabled people such as me could attend them.

That year, at NeurIPS (the annual conference on Neural Information Processing Systems) in Montreal, Canada, at least 4,000 people attended and I couldn't identify a single person who could be categorized as visibly disabled. Statistically, it doesn't add up to not have any disabled participants.

I also observed many accessibility issues. For example, I saw posters that were inconsiderate with respect to colour blindness. The place was so crowded that people who use assistive devices such as wheelchairs, white canes or service dogs wouldn't have had room to navigate the poster session. There were elevators, but for somebody with limited mobility, it would not have been easy to access all the session rooms, given the size of the venue. There were also no sign-language interpreters.

Since 2019, {Dis}Ability in AI has helped facilitate better accessibility at NeurIPS. There were interpreters, and closed captioning for people with hearing problems. There were volunteer escorts for people with impaired mobility or vision who requested help. There were hotline counsellors and silent rooms because large conferences can be overwhelming. The idea was: this is what we can provide now, but please reach out in case we are not considerate with respect to something, because we want to be ethical, fair, equal and honest. Disability is part of society, and it needs to be represented and included.

Many disabled researchers have shared their fears and concerns about the barriers they face in AI. Some have said that they wouldn't feel safe sharing details about their chronic illness, because if they did so, they might not get promoted, be treated equally, have the same opportunities as their peers, be given the same salary and so on. Other AI researchers who reached out to me had been bullied and felt that if they spoke up about their condition again, they could even lose their jobs.

People from marginalized groups need to be part of all the steps of the AI process. When disabled people are not included, the algorithms are trained without taking our community into account. If a sighted person closes their eyes, that does not make them understand what a blind person must deal with. We need to be part of these efforts.

Being kind is one way that non-disabled researchers can make the field more inclusive. Non-disabled people could invite disabled people to give talks or be visiting researchers or collaborators. They need to interact with our community at a fair and equal level.

William Agnew is a computer science PhD candidate at the University of Washington in Seattle. Sabine Weber is a scientific consultant at VDI/VDE Innovation + Technik in Erfurt, Germany. They are organizers of the advocacy organization Queer in AI.

Agnew: I helped to organize the first Queer in AI workshop for NeurIPS in 2018. Fundamentally, the AI field doesn't take diversity and inclusion seriously. Every step of the way, efforts in these areas are underfunded and underappreciated. The field often protects harassers.

Most people doing the work in Queer in AI are graduate students, including me. You can ask, "Why isn't it the senior professor? Why isn't it the vice-president of whatever?" The lack of senior members limits our operation and what we have the resources to advocate for.

The things we advocate for are happening from the bottom up. We are asking for gender-neutral toilets; putting pronouns on conference registration badges, speaker biographies and in surveys; opportunities to run our queer-AI experiences survey, to collect demographics, experiences of harm and exclusion, and the needs of the queer AI community; and we are opposing extractive data policies. We, as a bunch of queer people who are marginalized by their queerness and who are the most junior people in our field, must advocate from those positions.

In our surveys, queer people consistently name the lack of community, support and peer groups as their biggest issues that might prevent them from continuing a career path in AI. One of our programmes gives scholarships to help people apply to graduate school, covering the fees for applications, standardized admissions tests such as the Graduate Record Examination (GRE), and university transcripts. Some people must fly to a different country to take the GRE. It's a huge barrier, especially for queer people, who are less likely to have financial support from their families and who experience repressive legal environments. For instance, US state legislatures are passing anti-trans and anti-queer laws affecting our membership.

In large part because of my work with Queer in AI, I switched from being a roboticist to being an ethicist. How queer people's data are used, collected and misused is a big concern. Another concern is that machine learning is fundamentally about categorizing items and people and predicting outcomes on the basis of the past. These things are antithetical to the notion of queerness, where identity is fluid and often changes in important and big ways, and frequently throughout life. We push back and try to imagine machine-learning systems that don't repress queerness.

You might say: "These models don't represent queerness. We'll just fix them." But queer people have long been the targets of different forms of surveillance aimed at outing, controlling or suppressing us, and a model that understands queer people well can also surveil them better. We should avoid building technologies that entrench these harms, and work towards technologies that empower queer communities.

Weber: Previously, I worked as an engineer at a technology company. I said to my boss that I was the only person who was not a cisgender dude in the whole team of 60 or so developers. He replied, "You were the only person who applied for your job who had the qualification. It's so hard to find qualified people."

But companies clearly aren't looking very hard. To them it feels like: "We're sitting on high. Everybody comes to us and offers themselves." Instead, companies could recruit people at queer organizations and at feminist organizations. Every university has a "women in science, technology, engineering and mathematics (STEM)" group or "women in computing" group that firms could easily go to.

But the thinking, "That's how we have always done it; don't rock the boat", is prevalent. It's frustrating. Actually, I really want to rock the boat, because the boat is stupid. It's such a disappointment to run up against these barriers.

Laura Montoya encourages those who, like herself, came to the field of artificial intelligence through a non-conventional route. Credit: Tim McMacken Jr (tim@accel.ai)

Executive director of the Accel.AI Institute and LatinX in AI in San Francisco, California.

In 2016, I started the Accel.AI Institute as an education company that helps under-represented or underserved people in AI. Now, it's a non-profit organization with the mission of driving AI for social-impact initiatives. I also co-founded the LatinX in AI programme, a professional body for people of Latin American background in the field. I'm first generation in the United States, because my family emigrated from Colombia.

My background is in biology and physical science. I started my career as a software engineer, but conventional software engineering wasn't rewarding for me. That's when I found the world of machine learning, data science and AI. I investigated the best way to learn about AI and machine learning without going to graduate school. I've always been an alternative thinker.

I realized there was a need for alternative educational options for people like me, who don't take the typical route, who identify as women, who identify as people of colour, who want to pursue an alternative path for working with these tools and technologies.

Later on, while attending large AI and machine-learning conferences, I met others like myself, but we made up a small part of the population. I got together with these few friends to brainstorm: "How can we change this?" That's how LatinX in AI was born. Since 2018, we've launched research workshops at major conferences, and hosted our own call for papers in conjunction with NeurIPS.

We also have a three-month mentorship programme to address the brain drain resulting from researchers leaving Latin America for North America, Europe and Asia. More senior members of our community and even allies who are not LatinX can serve as mentors.

In 2022, we launched our supercomputer programme, because computational power is severely lacking in much of Latin America. For our pilot programme, to provide research access to high-performance computing resources at the Guadalajara campus of the Monterrey Institute of Technology in Mexico, the technology company NVIDIA, based in Santa Clara, California, donated a DGX A100 system (essentially a large server computer). The government agency for innovation in the Mexican state of Jalisco will host the system. Local researchers and students can share access to this hardware for research in AI and deep learning. We put out a global call for proposals for teams that include at least 50% LatinX members who want to use this hardware, without having to be enrolled at the institute or even be located in the Guadalajara region.

So far, eight teams have been selected to take part in the first cohort, working on projects that include autonomous-driving applications for Latin America and monitoring tools for animal conservation. Each team gets access to one graphics processing unit, or GPU (designed to handle complex graphics and visual-data-processing tasks in parallel), for the period of time they request. This will be an opportunity for cross-collaboration, for researchers to come together to solve big problems and use the technology for good.

See the original post here:

Why AI's diversity crisis matters, and how to tackle it - Nature.com


A high school science project that seeks to help prevent suicide – NPR

If you or someone you know may be considering suicide, contact the 988 Suicide & Crisis Lifeline by calling or texting 9-8-8, or the Crisis Text Line by texting HOME to 741741.

Text messages, Instagram posts and TikTok profiles. Parents often caution their kids against sharing too much information online, wary of how all that data gets used. But one Texas high schooler wants to use that digital footprint to save lives.

Siddhu Pachipala is a senior at The Woodlands College Park High School, in a suburb outside Houston. He's been thinking about psychology since seventh grade, when he read Thinking, Fast and Slow by psychologist Daniel Kahneman.

Concerned about teen suicide, Pachipala saw a role for artificial intelligence in detecting risk before it's too late. In his view, it takes too long to get kids help when they're suffering.

Early warning signs of suicide, like persistent feelings of hopelessness, changes in mood and sleep patterns, are often missed by loved ones. "So it's hard to get people spotted," says Pachipala.

For a local science fair, he designed an app that uses AI to scan text for signs of suicide risk. He thinks it could, someday, help replace outdated methods of diagnosis.

"Our writing patterns can reflect what we're thinking, but it hasn't really been extended to this extent," he said.

The app won him national recognition, a trip to D.C., and a speech on behalf of his peers. It's one of many efforts under way to use AI to help young people with their mental health and to better identify when they're at risk.

Experts point out that this kind of AI, called natural language processing, has been around since the mid-1990s. And, it's not a panacea. "Machine learning is helping us get better. As we get more and more data, we're able to improve the system," says Matt Nock, a professor of psychology at Harvard University, who studies self-harm in young people. "But chat bots aren't going to be the silver bullet."

Colorado-based psychologist Nathaan Demers, who oversees mental health websites and apps, says that personalized tools like Pachipala's could help fill a void. "When you walk into CVS, there's that blood pressure cuff," Demers said. "And maybe that's the first time that someone realizes, 'Oh, I have high blood pressure. I had no idea.' "

He hasn't seen Pachipala's app but theorizes that innovations like his raise self-awareness about underlying mental health issues that might otherwise go unrecognized.

Building SuiSensor

Pachipala set out to design an app that someone could download to take a self-assessment of their suicide risk. They could use their results to advocate for their care needs and get connected with providers. After many late nights spent coding, he had SuiSensor.

[Photo: Siddhu Pachipala. Credit: Chris Ayers Photography/Society for Science]

Using sample data from a medical study, based on journal entries by adults, Pachipala said SuiSensor predicted suicide risk with 98% accuracy. Although it was only a prototype, the app could also generate a contact list of local clinicians.
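The article doesn't describe SuiSensor's internals, but the general idea of scanning writing for linguistic warning signs can be illustrated with a toy, standard-library-only sketch. The lexicon, weights, threshold, and journal entries below are invented for illustration; they are not Pachipala's method or data:

```python
import math
import re

# Hypothetical lexicon of warning-sign terms with illustrative weights.
RISK_LEXICON = {"hopeless": 2.0, "burden": 1.5, "alone": 1.0, "tired": 0.5}

def risk_score(entry: str) -> float:
    """Score a journal entry between 0 and 1 from lexicon hits (toy logistic model)."""
    tokens = re.findall(r"[a-z']+", entry.lower())
    raw = sum(RISK_LEXICON.get(tok, 0.0) for tok in tokens)
    # Bias of -2 keeps neutral text near the low end of the scale.
    return 1.0 / (1.0 + math.exp(-(raw - 2.0)))

entries = [
    "Had pasta with friends, feeling good about exams.",
    "I feel hopeless and alone, like a burden to everyone.",
]
scores = [risk_score(e) for e in entries]
print(scores)
```

A real system like the one described would instead learn its features from labeled clinical data rather than a hand-written word list, which is part of why validation studies matter.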

In the fall of his senior year of high school, Pachipala entered his research into the Regeneron Science Talent Search, an 81-year-old national science and math competition.

There, panels of judges grilled him on his knowledge of psychology and general science with questions like: "Explain how pasta boils. ... OK, now let's say we brought that into space. What happens now?" Pachipala recalled. "You walked out of those panels and you were battered and bruised, but, like, better for it."

He placed ninth overall at the competition and took home a $50,000 prize.

The judges found that, "His work suggests that the semantics in an individual's writing could be correlated with their psychological health and risk of suicide." While the app is not currently downloadable, Pachipala hopes that, as an undergraduate at MIT, he can continue working on it.

"I think we don't do that enough: trying to address [suicide intervention] from an innovation perspective," he said. "I think that we've stuck to the status quo for a long time."

Current AI mental health applications

How does his invention fit into broader efforts to use AI in mental health? Experts note that there are many such efforts underway, and Matt Nock, for one, expressed concerns about false alarms. He applies machine learning to electronic health records to identify people who are at risk for suicide.

"The majority of our predictions are false positives," he said. "Is there a cost there? Does it do harm to tell someone that they're at risk of suicide when really they're not?"

And data privacy expert Elizabeth Laird has concerns about implementing such approaches in schools in particular, given the lack of research. She directs the Equity in Civic Technology Project at the Center for Democracy & Technology (CDT).

While acknowledging that "we have a mental health crisis and we should be doing whatever we can to prevent students from harming themselves," she remains skeptical about the lack of "independent evidence that these tools do that."

All this attention on AI comes as youth suicide rates (and risk) are on the rise. Although there's a lag in the data, the Centers for Disease Control and Prevention (CDC) reports that suicide is the second leading cause of death for youth and young adults ages 10 to 24 in the U.S.

Efforts like Pachipala's fit into a broad range of AI-backed tools available to track youth mental health, accessible to clinicians and nonprofessionals alike. Some schools are using activity monitoring software that scans devices for warning signs of a student doing harm to themselves or others. One concern, though, is that once these red flags surface, that information can be used to discipline students rather than support them, "and that that discipline falls along racial lines," Laird said.

According to a survey Laird shared, 70% of teachers whose schools use data-tracking software said it was used to discipline students. Schools can stay within the bounds of student record privacy laws, but fail to implement safeguards that protect them from unintended consequences, Laird said.

"The conversation around privacy has shifted from just one of legal compliance to what is actually ethical and right," she said. She points to survey data that shows nearly 1 in 3 LGBTQ+ students report they've been outed, or know someone who has been outed, as a consequence of activity monitoring software.

Matt Nock, the Harvard researcher, recognizes the place of AI in crunching numbers. He uses machine learning technology similar to Pachipala's to analyze medical records. But he stresses that much more experimentation is needed to vet computational assessments.

"A lot of this work is really well-intended, trying to use machine learning, artificial intelligence to improve people's mental health ... but unless we do the research, we're not going to know if this is the right solution," he said.

More students and families are turning to schools for mental health support. Software that scans young people's words, and by extension thoughts, is one approach to taking the pulse of youth mental health. But it can't take the place of human interaction, Nock said.

"Technology is going to help us, we hope, get better at knowing who is at risk and knowing when," he said. "But people want to see humans; they want to talk to humans."

See the article here:

A high school science project that seeks to help prevent suicide - NPR


Scepter, ExxonMobil Team With AWS To Address Methane … – Society of Petroleum Engineers

Scepter and ExxonMobil are working with Amazon Web Services (AWS) to develop a data-analytics platform to characterize and quantify methane emissions, initially in the US Permian Basin, from various monitoring platforms that operate from the ground, in the air, and from space, with the potential for global deployment in the near future. This collaboration has the potential to redefine methane detection and mitigation efforts and will contribute to broader satellite-based emission reduction efforts across a dozen industries, including energy, agriculture, manufacturing, and transportation. Rapidly reducing methane emissions is regarded as the single most effective strategy to reduce global warming in the near term and keep the goal of limiting warming to 1.5°C within reach.

According to the International Energy Agency, methane is responsible for approximately 30% of the rise in global temperatures since the Industrial Revolution, making it the second-largest contributor to climate change behind carbon dioxide. Methane is released during oil and gas production processes, and the industry accounts for about a quarter of the global anthropogenic methane emitted into the atmosphere. That makes the Permian Basin, among the largest oil- and gas-producing regions in the world, ripe for methane monitoring and mitigation.

Scepter, which specializes in using global Earth- and space-based data to measure air pollution in real time, has been working with ExxonMobil to optimize sensors for low-Earth-orbit satellites that will form a constellation by 2026, enabling real-time, continuous monitoring of methane emissions from oil and gas operations on a global scale. As part of this effort, the companies are conducting stratospheric balloon missions to test the technology in high-altitude conditions. Bringing in AWS is an important next step to develop a fusion and analytics platform that can integrate and analyze methane emissions data from a spectrum of detection capabilities operating across different layers, to eventually include satellites.

"We will be processing very large amounts of emissions data covering the most prolific oil and gas basin in the US, one that has made the United States the world's top energy producer," said Scepter CEO and founder Philip Father.

"Advanced AWS cloud services make it possible to rapidly synthesize and analyze information from multiple data sources and are a perfect choice to help Scepter achieve its goal of helping customers reduce methane emissions," said Clint Crosier, director of aerospace and satellite at AWS.

While Scepter developed the data fusion platform, a comprehensive portfolio of AWS cloud services is helping Scepter process and aggregate the large amounts of data captured by the multilayered system of methane-emission detection technologies. For example, AWS Lambda enables efficient, cost-effective serverless processing of large data sets, and Amazon API Gateway ingests data from multiple sources. These capabilities will allow Scepter to pinpoint emission events more precisely and quantify emissions for customers such as ExxonMobil, enabling more rapid and effective mitigation. The relationship with AWS will allow Scepter to significantly boost its atmospheric data fusion capabilities, helping not only oil and gas companies monitoring for methane, but also other industries such as agriculture, waste management, health care, retail, and transportation to monitor CO2 and air particulates.
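As a rough illustration of that ingestion path, a Lambda function fronted by API Gateway receives each request as a JSON event whose `body` field holds the payload. The sketch below validates one emissions reading; the payload fields (`site_id`, `methane_ppm`, `source`) and the alert threshold are hypothetical, not Scepter's actual schema:

```python
import json

def handler(event, context):
    """Hypothetical Lambda behind API Gateway: validate one methane reading."""
    reading = json.loads(event["body"])  # API Gateway proxy events carry the body as a JSON string
    required = {"site_id", "methane_ppm", "source"}
    missing = required - reading.keys()
    if missing:
        return {"statusCode": 400,
                "body": json.dumps({"error": f"missing fields: {sorted(missing)}"})}
    # A real pipeline would persist the reading (e.g., to a queue or data store) here.
    flagged = reading["methane_ppm"] > 10.0  # illustrative alert threshold
    return {"statusCode": 200,
            "body": json.dumps({"accepted": True, "flagged": flagged})}

# Example invocation with a simulated API Gateway proxy event:
event = {"body": json.dumps(
    {"site_id": "permian-042", "methane_ppm": 18.3, "source": "satellite"})}
print(handler(event, None))
```

Keeping ingestion stateless like this is what lets a serverless design fan out across many concurrent data sources without provisioning servers.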

"Technology solutions are essential to reduce methane emissions globally," said Sam Perkins, ExxonMobil Unconventional Technology Portfolio manager. "ExxonMobil is at the forefront of the development and deployment of new state-of-the-art detection technologies as we continue to expand our aggressive continuous methane monitoring program. This collaboration will enable us to further scale and enhance methane emission detection capabilities while also having the potential to support similar efforts in the industry."

Go here to see the original:

Scepter, ExxonMobil Team With AWS To Address Methane ... - Society of Petroleum Engineers
