
Toward an Inclusive Artificial Intelligence: The Ms. Q&A With … – Ms. Magazine

When presented with two sets of questions, one generated by ChatGPT (a new form of artificial intelligence that generates human-like conversational text) and the other by a team of writers, data scientist and Women in Data Science Conference speaker Gabriela de Queiroz quickly identified the correct authors of each. Asked how she had so swiftly distinguished between the AI-generated text and the human text, de Queiroz observed that the ChatGPT text was generic and formulaic, while the text produced by human hands was more creative and individualistic.

Media coverage of new technologies often presents these technologies as inevitable and self-generating, rather than as products made by human hands. Framed this way, new technologies appear so all-encompassing that they seem impossible to challenge or change once they have been implemented.

Media headlines are rife with dire predictions about ChatGPT and the future of AI: Will it make writers obsolete? Will it lead to rampant cheating? Will it rewire our brains? Will it prey on the vulnerable? But when de Queiroz talks about her efforts to make artificial intelligence more inclusive, she takes a different approach to understanding the ever more influential and pervasive role of AI in contemporary societies.

Hailing from Brazil, and as a woman in a field dominated by men, de Queiroz has reason to be skeptical of AI. Gender equity and minority inclusion in tech fields are uphill battles for leaders interested in equity and belonging, like de Queiroz, and data science is no exception.

Currently, de Queiroz works as a principal cloud advocate at Microsoft, where she leads the global education advocacy team. This team's mission is to welcome, guide and connect the future generation of student developers and support them in thriving in a professional setting.

For de Queiroz, diversity is central to preventing harmful consequences and promoting fairness in new systems.

"A diverse team of developers has different backgrounds and different experiences," she said. "They can bring their unique ethical considerations and insights about potential unintended consequences of AI systems to the design and development of new technologies."

In light of the apocalyptic media coverage of AI, it is understandable that many people see language models like ChatGPT, and other new machine learning technologies like Meta's Make-A-Video, as the beginning of the end. Yet when discussing these challenges, de Queiroz remains optimistic regarding the future (and current) role that machine learning can play in our lives.

She has already seen some positive effects of ChatGPT in her daily work. Since English is not her first language, de Queiroz has been using ChatGPT for translation, which saves her both time and the need to juggle multiple tools.

"I don't need to spend an hour [translating]," de Queiroz said. "Maybe I now spend 20 minutes."

For de Queiroz, AI like ChatGPT has the potential to be a powerful resource and a positive force in our world, provided we recognize its limitations and work to address them.

This belief that the power of AI can be used positively, and that including diverse perspectives in its development is essential to doing that, is what inspired de Queiroz to found AI Inclusive, a worldwide organization that promotes diversity in the AI community and addresses bias and gender disparities present in AI.

"One of the primary problems is who has access to these systems throughout the world," said de Queiroz. "You need, at least, a computer. You need to use the Internet. And some communities don't have these basic forms of access."

Even in communities that do have access to the necessary tools and the internet, the culture of male-dominated coding and other online communities has often been inhospitable to women. To help address this problem, in 2012 de Queiroz founded R-Ladies, a worldwide organization dedicated to promoting gender diversity in the R statistical programming language community. Today, R-Ladies has over 100,000 members in over 60 countries, and its total outreach is even larger, a refreshing example of what can be achieved when advocates work together to build networks and communities.

ChatGPT and other machine learning systems are going to be scaled to be used for a wide variety of applications in the future. Indeed, in many cases, they already are. However, de Queiroz said, advances in artificial intelligence do not necessarily equate to a phasing-out of human involvement. In fact, these advancements require deliberate and collaborative involvement on the part of all those who may have been left out of conversations about AI in the past.

On March 8, 2023, de Queiroz and other women leaders in data science convened at Stanford University's Women in Data Science Conference (WiDS), a globally recognized conference focused on elevating women in the field of data science by providing inspiration, education, community and support. De Queiroz's talk, "Embrace the Journey: Learnings and Inspiration From a Non-Linear Path into Data Science," was geared toward those just starting out in data science. Other WiDS speakers highlighted the work still needed on inclusivity and bias before data science and AI technologies (like ChatGPT) can reach their full potential.

When we asked ChatGPT what needs to be done to improve AI, data quality and diversity were the first items on its list. Even the machines know that inclusive AI should be our priority.


Industrial Engineering Professor Named John L. Imhoff Chair in … – University of Arkansas Newswire

Ashley Reeves

Xiao Liu and plaque honoring the late professor John L. Imhoff

Xiao Liu, assistant professor of industrial engineering, has been named the John L. Imhoff Chair in Industrial Engineering for a two-year period. He is the ninth person to hold this title.

Liu joined the Department of Industrial Engineering in 2017 from the IBM Thomas J. Watson Research Center in Yorktown Heights, New York. His research focuses on the integration of governing physics and domain knowledge into data-driven models for multidisciplinary applications and is supported by the National Science Foundation and industry.

Department head Ed Pohl stated that Liu is the perfect faculty member for the John L. Imhoff Chair. "He excels in the classroom, he excels in his research, he is an outstanding mentor to his students, and he is passionate about making a global impact with his research. We are extremely fortunate to have Dr. Liu as a member of the team, and he is very deserving of this honor," he said.

In 2022, Liu was the recipient of a CAREER Award from the National Science Foundation's Faculty Early Career Development Program. The award supports early-career faculty who have the potential to serve as academic role models in research and education and to lead advances in the mission of their department or organization. The award comes with a grant Liu will use to further his research in domain-aware statistical learning. Liu was also honored with the College of Engineering Rising Star award in 2022.

Liu responded to the appointment by saying, "Dr. John L. Imhoff thrived on the global impact potential of the industrial engineering discipline. This support will fulfill Dr. Imhoff's vision by creating additional international education activities for our students, including summer visiting programs, attending international workshops and domestic conferences."

The John L. Imhoff Chair in Industrial Engineering was established in 1989 by John L. Imhoff's family, friends, former students and colleagues. The chair serves to reward faculty members who embody Imhoff's passion for global studies and its importance to student growth in the profession, commitment to student success, enthusiasm for teaching and excellence in research.

Learn more about Xiao Liu's research here.

About the Department of Industrial Engineering: The Department of Industrial Engineering was founded in 1950, led by department head John L. Imhoff, who believed deeply in the global impact of industrial engineering. Today, the department averages over 200 undergraduate students (sophomore through senior) and over 40 doctoral and master's students. In addition, the department has three online master's degrees: the Master of Science in Operations Management, Master of Science in Engineering Management and Master of Science in Operations Analytics. These three programs alone enroll over 2,000 students each academic year. A wide range of practical research is conducted in the department in the areas of reliability, maintainability and quality engineering; transportation, logistics and distribution; health care systems engineering; manufacturing and automation; engineering management and big data; and data analytics in each of the five research centers housed in the department. To learn more about the Department of Industrial Engineering please visit our website.


Avaloq, an NEC Company: Results Show Investors Are Ready for AI … – Yahoo Finance

Tokyo, Japan--(Newsfile Corp. - March 20, 2023) - NEC Corporation (TSE: 6701) (OTC Pink: NIPNF) shares a recent survey conducted by Avaloq, an NEC Company and leading financial services provider, stating that nearly 80% of investors are comfortable with having artificial intelligence (AI) in a lead or supporting role when it comes to managing their portfolios.

The poll, conducted among 6,000 retail, affluent, high-net-worth and ultra-high-net-worth individuals, also revealed 73% are open to receiving investment advice supported by AI, and 74% are willing to receive AI-assisted product recommendations.

From providing entertainment suggestions to facilitating navigation and shopping, AI has increasingly become the backbone of numerous services relied on daily by a tremendous number of people. According to the survey results, AI is making significant headway in the banking industry and is currently considered a major disruptor in this field.

The ability of AI to sort vast quantities of data, automate repetitive processes and accurately identify outliers has proven particularly valuable in the back and middle offices of financial institutions. From anti-money laundering to payment fraud prevention, AI is helping financial institutions to not only streamline their workflows, but to increase service accuracy by minimizing the risk of human error.

Wealth management professionals are seeing increased AI adoption in the front and investment office, where they have traditionally been reluctant to incorporate AI in the advisory process, according to Gery Zollinger, Head of Data Science and Analytics at Avaloq. When looking into the attitudes of wealth management clients, he added, it's clear that the time is right for AI to play a greater role in areas such as portfolio analysis or optimization.

"The main benefits of AI technology in finance are enhanced operational efficiency, increased relationship manager productivity and improved data analysis," said Zollinger. In the following Q&A, he shares his thoughts on the present and future of AI in wealth management.


What are the current uses of AI in the front office?

Two key areas have been identified where AI is already established in the wealth management sector. The first use case is virtual assistant technology to augment the role of the relationship manager. Smart virtual assistants can support relationship managers, for example, by providing near-instant responses to client requests for account statements, transfers, trade proposals, etc. This works based on natural language processing (NLP), similar to the technology behind ChatGPT. The virtual assistant analyses client-adviser communications, understands the client's intent and suggests next best actions or relevant news items for the adviser to share. This AI support enables relationship managers to serve a larger and more diverse client base, while ensuring quicker responses to keep clients engaged.
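To make the intent-routing idea more concrete, here is a minimal illustrative sketch in Python. It is not Avaloq's implementation: the intent labels, sample messages, and suggested actions are invented for this example, and a production assistant would use far richer NLP models and data.

```python
# Minimal sketch of NLP intent routing for a relationship-manager assistant.
# Illustrative only: labels, phrases, and actions are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labelled corpus of client messages mapped to intents (assumed data).
messages = [
    "Could you send me my latest account statement?",
    "Please transfer 5,000 to my savings account.",
    "What trades would you propose for my portfolio this quarter?",
    "I need a copy of last year's statements for my accountant.",
    "Move some cash into the bond fund, please.",
    "Any investment ideas given the current market?",
]
intents = ["statement", "transfer", "trade_proposal",
           "statement", "transfer", "trade_proposal"]

# Next best action the assistant would surface to the adviser for each intent.
next_best_action = {
    "statement": "Generate and attach the latest account statement.",
    "transfer": "Pre-fill a payment instruction for the adviser to confirm.",
    "trade_proposal": "Draft a trade proposal aligned with the client's risk profile.",
}

# A simple TF-IDF + logistic regression classifier stands in for the NLP model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(messages, intents)

incoming = "Hi, can I get my account statement for February?"
predicted_intent = model.predict([incoming])[0]
print(predicted_intent, "->", next_best_action[predicted_intent])
```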

The second is improved client lifecycle management. Wealth managers can deploy network analytics to automate prospect mapping, while churn prediction engines can alert relationship managers when a client is at risk of leaving the firm. In our experience, these tools can enable wealth managers to increase their client acquisition rate by up to 20% while staving off client attrition.
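The churn-prediction idea can likewise be sketched in a few lines. Again, this is only an illustration: the features, synthetic data, and the 0.6 alert threshold are assumptions for the example, not the engine described above.

```python
# Illustrative churn-risk sketch: flag clients whose predicted probability of
# leaving exceeds a threshold so a relationship manager can intervene.
# Feature names and the 0.6 threshold are assumptions, not a vendor's engine.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic client features: [logins_last_90d, months_since_last_meeting,
# assets_trend_pct, complaints_last_year]
X = rng.normal(size=(500, 4))
# Synthetic label: clients with stale contact and falling assets churn more.
y = ((X[:, 1] > 0.5) & (X[:, 2] < -0.2)).astype(int)

model = GradientBoostingClassifier().fit(X, y)

new_clients = rng.normal(size=(5, 4))
risk = model.predict_proba(new_clients)[:, 1]
for client_id, p in enumerate(risk):
    flag = "ALERT relationship manager" if p > 0.6 else "ok"
    print(f"client {client_id}: churn risk {p:.2f} -> {flag}")
```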

How will AI help wealth management in the future?

Advances in NLP will help drive conversational banking (i.e., interaction with relationship managers over multiple channels) and substantially increase the productivity of bank employees. An exciting future application of this technology is the combination of NLP with a voice-to-text solution. This would enable the AI to suggest next best actions to the relationship manager in near real time during client meetings, such as generating trade ideas, or to generate a summary and to-do list after the meeting has finished.

What about AI in the broader banking industry?

The biggest change on the horizon is new regulation on the use and ethics of AI. A prime example is the pending AI Act by the EU Commission, which sets out clear guidance with respect to fairness, verifiability and non-discrimination. We believe that this guidance will give decision-makers the regulatory confidence to implement value-adding AI tools to leverage their vast datasets, which will ultimately boost innovation in the financial sector. Increased client acceptance of AI is expected over time, especially as individuals become more familiar with AI-augmented tools such as ChatGPT.

Gery Zollinger

About Gery Zollinger, Head of Data Science and Analytics at Avaloq, an NEC Company

Gery Zollinger leads the team behind the Avaloq Insight product line, which is designed to embed data analytics and artificial intelligence in wealth management. Gery joined Avaloq in February 2019 from Credit Suisse, where he worked in the global Credit Risk Analytics team and was responsible for credit risk modelling within the Private Banking and Investment Banking divisions. Gery has worked in analytics and quantitative modelling for more than ten years. He holds a master's degree in economics from the University of Lausanne.

About Avaloq

Avaloq is a premium provider of front-to-back software and services for 160 financial institutions around the world. Avaloq's clients include private banks, wealth managers and investment managers, as well as retail and neo banks. Avaloq develops software that can be deployed flexibly through cloud-based Software as a Service (SaaS) or on-premises, and the Company offers Banking Operations outsourcing through its Business Process as a Service (BPaaS) model. Avaloq is a subsidiary of NEC Corporation, a global leader in the integration of IT and network technologies. http://www.avaloq.com

About NEC

NEC Corporation has established itself as a leader in the integration of IT and network technologies while promoting the brand statement of "Orchestrating a brighter world." NEC enables businesses and communities to adapt to rapid changes taking place in both society and the market as it provides for the social values of safety, security, fairness and efficiency to promote a more sustainable world where everyone has the chance to reach their full potential. For more information, visit NEC at https://www.nec.com.

Press Contact

If you are a member of the press and would like to feature Avaloq or speak to an Avaloq representative, please contact press@avaloq.com.

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/158277


Pluralsight study finds 72% of tech leaders plan to increase their … – PR Newswire

SILICON SLOPES, Utah, March 20, 2023 /PRNewswire/ -- Pluralsight, the technology workforce development company, today released its 2023 State of Upskilling Report, which compiles survey results from more than 1,200 tech learners and leaders in the United States, the United Kingdom, Australia, and India on the most current trends and attitudes around tech skills development.

Amid economic uncertainty and downturn, organizations are leaning on their technologists to continue to innovate and drive business value. Though 65% of tech team leaders have been asked to cut costs, 72% still plan to increase their investment in tech skill development in 2023. And because upskilling existing talent is more cost-effective than hiring new employees, 97% of learning and development and HR directors say they are prioritizing internal talent over hiring for open positions.

"This year's research findings underscore the importance of maximizing employee potential and optimizing learning investments to drive business ROI," said Gary Eimerman, Chief Product Officer at Pluralsight. "Organizations and individuals alike are being asked to do more with less in the face of reduced workforces and larger economic pressures. For future-focused companies, an emphasis on continuous upskilling will help sharpen their competitive edge"

A Force Multiplier: Upskilling Amid Economic Uncertainty

The past several months have brought an onslaught of layoffs and hiring freezes across industries, especially tech. As 65% of tech executives are being asked to look for cost efficiencies in response to economic uncertainty, the consequences have a ripple effect.

Sixty-seven percent of tech managers reported that workforce reductions in their organization across software, IT, and data have resulted in their teams taking on more responsibility, while nearly half (47%) of technologists agree they have had to perform additional responsibilities outside of their primary job function.

Investing in tech skills development helps equip overwhelmed employees with the tools needed to conquer these new and unfamiliar responsibilities. More than half (52%) of technologists said it's important to learn new tech skills in times of economic turbulence, and as day-to-day responsibilities evolve and expand in response to layoffs, upskilling becomes a critical aspect of not just individual success, but organizational success.

Technology Skills Gaps in 2023

Amid these workforce challenges, the 2023 State of Upskilling Report illuminates a decrease in tech skills confidence across respondents. Last year's report found that 80% of technologists were confident they had the skills to master their current job. This year, the majority of technologists don't feel that same level of confidence across major tech skill areas. The top three skills technologists and technology managers are prioritizing to drive business value are cybersecurity, data science, and cloud. According to the report:

Lack of time and budget have remained the biggest barriers to upskilling over the past two years. And for technologists who secure the time or budget to prioritize upskilling, 30% don't know where to focus their skills development, while 25% aren't sure which resources to leverage.

With 85% of organizations actively engaged in, or planning to begin, a digital transformation project in 2023, technologists need guided learning mapped to key business outcomes. For more insights, download the full 2023 State of Upskilling Report.

About Pluralsight

Pluralsight is the leading technology workforce development company that helps companies and teams build better products by developing critical skills, improving processes and gaining insights through data, and providing strategic skills consulting. Trusted by forward-thinking companies of every size in every industry, Pluralsight helps individuals and businesses transform with technology. Pluralsight Skills helps enterprises build technology skills at scale with expert-authored courses on today's most important technologies, including cloud, artificial intelligence and machine learning, data science, and security, among others. Skills also includes tools to align skill development with business objectives, virtual instructor-led training, hands-on labs, skill assessments and one-of-a-kind analytics. Flow complements Skills by providing engineering teams with actionable data and visibility into workflow patterns to accelerate the delivery of products and services. For more information about Pluralsight, visit pluralsight.com.

Media Contact: Ryan Sins, Communications Manager

SOURCE Pluralsight


Heard on the Street 3/20/2023 – insideBIGDATA

Welcome to insideBIGDATA's Heard on the Street round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Let's pump the brakes on these wilder claims about AI. Commentary by Beerud Sheth, CEO at Gupshup

Since the end of last year, two topics have hogged maximum limelight. One is of course ChatGPT and Generative AI technologies in general; the other is the disruption these technologies are going to create in jobs, education, fields of creativity, etc. With ChatGPT, we've already seen significant improvement in efficiencies and capabilities, making the dividends amply clear. But the thing with technology is, it always has a dual impact: there are winners and there are some losers as well. As a society, what we need to make sure is that we are able to support and enable these people to adjust and adapt to this new world, while benefiting from the overall value created. We'll have to be careful to balance the common greater good against individual wins and losses. A good way to think about AI would be to look at it as augmenting human capability. Because no matter how powerful AI gets, humans plus AI will be even more powerful than either humans alone or AI on its own. It's quite possible that we'll see the rise of new jobs where people will work with AI as the partner. Think of someone who wants to make an animated movie and has just the perfect idea, but can't sketch, animate or programme. Now, with ChatGPT he can prototype quickly, so there's no question that lots of jobs will become augmented, and therefore there exists a lot of potential for new jobs that didn't exist before. The future of AI is undoubtedly bright, and it will continue to transform various industries and aspects of our lives. We can expect AI to become more sophisticated and able to perform a wider range of tasks, as it becomes more capable of learning and adapting to new environments. However, AI has its natural limits. For example, while AI can be used to diagnose medical conditions, it cannot replace the empathy and judgment of a human doctor. Similarly, while AI can analyze large data sets and make predictions, it may be limited by biases inherent in the data it's trained on. Ultimately, the future of AI is likely to involve a balance between automation and human work.

Digital transformation and its effects on the hybrid workplace. Commentary by Krishna Nacha, SVP and Head of North America and Latin America at Iron Mountain

The hybrid model is here to stay, notwithstanding the fence sitters and the last few corporate holdouts. As digital transformation continues to accelerate at a rapid pace, mobile employees working from flexible locations (including offices, hot-desking spaces, homes, local cafes, hotels, etc.) create an inherent information leakage and data security risk, which, in most cases, is unintentional. Organizations must not merely tighten, but completely redesign, their policies and procedures around secure access to information across all access points and devices. In response to the demands of the hybrid workplace, cloud-based services are naturally increasing, but it can be difficult to keep track of the tentacles of the information spread in a multi-cloud environment. Knowledge around data residency, including what information you should hold, where it is stored, which regulatory framework(s) it is governed by, and who within your organization has access to it, is critical to data security and privacy. In most parts of the world, businesses operate under local data regulations that dictate how the data of a nation's citizens or residents must be collected, cleaned, processed and stored within its borders. The primary reason enterprises choose to store data in different locations is often related to regulations, data policies and taxes. However, companies are allowed to transfer data after complying with local data protection and privacy laws. In this scenario, businesses must notify users and get their consent before obtaining and using their information. AI can be used to aggregate, analyze, and present data clearly and extract the most relevant information, so that businesses can make the right decisions around data sovereignty. Subsequently, AI can also be applied to content search and redaction, whereby personally identifiable information in documents is hidden from unauthorized access, and to better integrate systems and overcome silos. From an information lifecycle viewpoint, AI can be used to identify and delete unnecessary data, as well as to support compliance and governance.

School shootings won't stop. How can AI help? Commentary by Tim Sulzer, CTO and co-founder of ZeroEyes

In 2022, the U.S. saw more than 300 shooting incidents on school grounds. Clearly, existing security approaches are not enough to fully protect students and faculty against gun-related threats. Specially trained AI has the potential to detect brandished guns the moment they appear on a security camera, which is often several minutes before the shooter begins firing. I'm not suggesting that AI has the capacity to differentiate between a lethal weapon brandished with ill intent and something that just looks like one (e.g., a toy gun, wallet or phone), but trained firearms experts certainly have this ability. The combination of AI-based gun detection that is trained on an extensive dataset and an expert eye can result in school staff and first responders receiving alerts with critical situational information in mere seconds. These extra minutes of advance notice could give schools time to lock down and law enforcement time to arrive and make an arrest before the first shot is fired, or lead to quicker response times to treat injuries in the aftermath.

Preparing for AI regulation. Commentary by Triveni Gandhi, Responsible AI Lead and Jacob Beswick, Director of AI Governance Solutions for Dataiku

We know that regulating the use of AI is forthcoming and that complying with regulation will become important to organizations. While there are no AI regulations yet, there are standards that are trickling out of international standards organizations, NIST, and some governments, such as AI Verify in Singapore. The question that remains to be answered by organizations is: should you start self-regulating in alignment with the intentions set out by these frameworks, even if obligations don't yet exist? We would argue that ChatGPT provides a good opportunity for this question to be asked and answered. We would argue further that the answer to the aforementioned question is: yes, self-regulate. And that, fundamentally, this should look like testing, validating and monitoring towards reliability, accountability, fairness and transparency. So where does self-regulation begin? First off, you need to understand the model you're using (risks, strengths, weaknesses). Secondly, you need to understand your use cases (risks, audiences, objectives). Being transparent about the use of these tools is important, and making sure all output is consistent and factual will be a differentiator for the most successful companies looking to use this tool. Being transparent doesn't just mean saying that you're using it; it means building out your organization's internal capabilities to speak to the model it's leveraging, the use case it's deploying, and what it's done to ensure the model is being used safely and in alignment with the objectives set out. Without doing this, no matter how promising the use case, the organization is exposing itself to risks where there is a departure from what's been promised to end users. That risk could range from the financial to the embarrassing. ChatGPT is not the only AI that is in use; it's only the most popular for now. Regulation and oversight of other types of AI systems is still incredibly relevant, and we shouldn't lose sight of that.
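As a rough illustration of what "testing, validating and monitoring" can look like in practice, the hypothetical check below computes overall accuracy and a simple accuracy gap across subgroups before sign-off. The thresholds and group labels are invented for the example and are not drawn from any regulation or specific vendor tooling.

```python
# Hypothetical pre-deployment check: overall accuracy plus a subgroup gap.
# Thresholds and group names are illustrative, not a regulatory standard.
import numpy as np

def validate(y_true, y_pred, groups, min_accuracy=0.85, max_gap=0.05):
    """Return a simple reliability/fairness report for a classifier's outputs."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    overall = float((y_true == y_pred).mean())
    per_group = {g: float((y_true[groups == g] == y_pred[groups == g]).mean())
                 for g in np.unique(groups)}
    gap = max(per_group.values()) - min(per_group.values())
    passed = overall >= min_accuracy and gap <= max_gap
    return {"overall_accuracy": overall, "per_group": per_group,
            "accuracy_gap": gap, "passed": passed}

# Toy example with two demographic groups (made-up labels and predictions).
report = validate(
    y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[1, 0, 1, 0, 0, 1, 0, 1],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(report)
```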

On Chief Data Officer & Data Governance. Commentary by Jame Beecham, Founder and CEO of ALTR

Seeing Data Governance at the top of this list aligns with a number of leading indicators for CDO attention and spend we have seen at ALTR. With reduced budgets and head counts, we are hearing from the industry that base-level governance topics will take priority in 2023. Things like improving data pipelines for speed of data delivery, data security, streamlined data access, and quality will take precedence over initiatives like lineage or data cataloging. I think a number of data catalog projects have been stalled or remain in jeopardy, as catalog workloads tend to boil the ocean. Look for small projects within data governance being completed quickly with tightly aligned teams. Key to this will be data governance tool sets that interoperate and work together without requiring large professional services spends to realize base-level data governance practices such as security and pipeline improvement.

Machine Learning and Adaptive Observability Take Off at the Edge. Commentary by Maitreya Natu, Chief Data Scientist, Digitate

More organizations are running machine learning algorithms on edge devices, which allows for faster analysis and eliminates the need to transfer large amounts of data. Instead of the traditional model of large servers that analyze large volumes of data, ML at the edge opens up creative avenues to perform analysis at the source itself. Adaptive observability presents a very promising use case for ML at the edge. Adaptive observability is an intelligent approach to collecting just the right monitoring data at the right time. The basic idea is to collect high-fidelity data when the system is unhealthy, and low-fidelity data otherwise. The analytics engine at the edge can profile normal behavior, assess system health, detect anomalies, predict future behavior, and thus recommend the right monitoring approach. Adaptive observability is thus able to collect just the right amount of data at the source, and also detect any abnormal behavior at the origin itself.
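A toy version of that idea, assuming a single numeric health metric: sample at a low rate while a rolling baseline looks normal and switch to high-fidelity collection when a reading deviates sharply. All rates and thresholds below are made up for illustration.

```python
# Toy adaptive-observability loop: raise the sampling rate when the metric
# deviates from its learned baseline. All thresholds and rates are illustrative.
from collections import deque
import math
import random

LOW_RATE, HIGH_RATE = 1, 10          # samples per minute (assumed)
window = deque(maxlen=60)            # rolling baseline of recent readings

def sampling_rate(value):
    """Return the next sampling rate based on how anomalous `value` looks."""
    if len(window) >= 10:
        mean = sum(window) / len(window)
        var = sum((v - mean) ** 2 for v in window) / len(window)
        z = abs(value - mean) / (math.sqrt(var) + 1e-9)
    else:
        z = 0.0                      # not enough history yet
    window.append(value)
    return HIGH_RATE if z > 3 else LOW_RATE

random.seed(0)
readings = [random.gauss(50, 2) for _ in range(30)] + [90]  # spike at the end
for r in readings:
    rate = sampling_rate(r)
print(f"last reading {readings[-1]:.1f} -> collect at {rate} samples/min")
```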

Why data pipelines without lineage are useless. Commentary by Tomas Kratky, CEO and founder of MANTA

Everyone talks about data lineage nowadays, but most people consider it only for regulatory and compliance reasons. It serves as documentation for auditors asking difficult questions about data origins or the data processing used to prepare key performance indicators. But at its heart, data lineage represents the dependencies in our systems and is a key enabler of efficient change management, making every organization truly agile. Imagine a security architect, changing the way sensitive customer data should be captured, using data lineage to instantly assess the impacts of those changes on the rest of the environment. Or an analyst using data lineage to review the existing sources for an executive report to decide the best way to calculate new metrics requested by the management team. Lack of lineage only leads to reduced enterprise agility, making your pipelines harder to change and thus useless. Contrary to popular opinion, you don't have to force anyone to learn what metadata is for them to benefit from data lineage. It should be activated and integrated into the workflows and workspaces people normally use to do their job. For example, information about the origin of the data behind a key metric should be available directly in the reporting tool. Or a notification sent to an architect about a data table and associated flow that can be decommissioned because no people or processes are using it. Even a warning shared with a developer as they try to build a critical business metric using a data source with questionable data quality in their data pipeline. That, and much more, is the future of (active) data lineage.
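One way to picture active lineage is as a dependency graph over data assets that can answer "what breaks if I change this?" The sketch below uses hypothetical table and report names; real lineage would be harvested automatically from code, SQL, and pipeline metadata rather than typed in by hand.

```python
# Tiny lineage graph: edges point from a data asset to things derived from it.
# Asset names are hypothetical; real lineage is extracted from code and SQL.
import networkx as nx

lineage = nx.DiGraph()
lineage.add_edges_from([
    ("crm.customers", "staging.customers_clean"),
    ("staging.customers_clean", "mart.revenue_by_region"),
    ("mart.revenue_by_region", "report.executive_dashboard"),
    ("erp.orders", "mart.revenue_by_region"),
])

# Impact analysis: everything downstream of an asset we plan to change.
impacted = nx.descendants(lineage, "crm.customers")
print("Changing crm.customers impacts:", sorted(impacted))

# First-pass decommissioning signal: assets with no downstream consumers
# in the graph (a human still confirms nothing else relies on them).
leaves = [n for n in lineage.nodes if lineage.out_degree(n) == 0]
print("No downstream consumers recorded:", leaves)
```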

Automation Liberation: The New Self-Service Approach to Cloud Migration. Commentary by Next Pathway CEO Chetan Mathur

With the recent boom of AI apps, tools and chatbots, there is a growing interest among cloud hyperscale providers, such as Microsoft and Google, in incorporating these tools into their cloud platforms for advanced AI and ML applications. The jump in the stock value of Microsoft following the announcement of its investment in ChatGPT is testament to the fact that the market likes these innovative solutions. For these AI tools to be meaningful, massive amounts of data need to be pushed to the cloud. Turning on AI will be predicated on moving legacy applications (code, data platforms, and data) to the cloud. The movement of legacy applications has historically been very challenging. In our research we have seen that companies are attracted to the business benefits of cloud computing, but are hesitant to move large, legacy applications. Most legacy applications have been developed over years, and there tends to be application sprawl. This makes it difficult for companies to know with a high degree of confidence which applications should be migrated to the cloud. The lack of visibility into data dependencies and data lineage further complicates matters. Moreover, those companies that have migrated workloads, either on their own or with a Global Service Integrator, have commented that the migrations took too long and were costly. To answer this challenge, what's needed is a self-service code translation product that provides a high degree of coverage and performance. Customers can select the workloads they want to translate, and select their preferred cloud target, and the solution will perform the translation automatically.

Quantum computing's biggest key in 2023. Commentary by Classiq CEO Nir Minerbi

It's well known that quantum computing offers the potential to break many of the cryptographic protocols that currently protect our most sensitive information. As a result, there is a pressing need to develop new cryptographic protocols that are resistant to quantum attacks. Furthermore, quantum computing can be used for a variety of military purposes, from developing new materials and advanced AI, to modeling complex systems. As a result, further investment in quantum computing is critical for the United States to maintain its competitive edge in the arms race.

Is low code the key to faster go-lives and revenue realization? Commentary by Deepak Anupalli, co-founder & CTO at WaveMaker

Low code enables professional developers to build products in a visually declarative manner, by abstracting commonly used functionalities as components. These components can be dragged and dropped onto an application canvas and then configured and customized per application. Such a methodology allows developers to tackle complexity with ease and build products almost 3X faster than traditional approaches. Components that represent an entire functionality bundled with UI, logic, and data can be stored in internal repositories that can be customized for any use case across the enterprise application landscape. This allows SMEs and business users to provide their expertise while building applications, and in turn, democratizes software development leading to better ideation and innovation.

Automation in the era of big data. Commentary by Tony Lee, CTO, Hyperscience

We live in a world characterized by big data: sweeping sets of structured and unstructured information that organizations can use to drive improvements and accelerate business outcomes. But despite our best efforts, humans cannot keep up with the sheer amount of information that needs to be processed. That's why I expect to see organizations prioritize automation solutions that can support teams in navigating data and immediately increase productivity and ROI in areas such as document processing. Those advancing their organizations with machine learning (ML) and advanced automation technologies are finding the most success in designing workflows that can process large amounts of data with a human-in-the-loop approach and continuous training. This strategy provides much-needed guardrails if the technology is ever stuck or needs additional supervision, while still allowing both parties to work efficiently alongside each other. If we can educate leaders better on how to unlock the full value of automation and ML, and its guiding hand to achieving operational efficiency, we'll see a lot of progress over the next few years.
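A common pattern behind human-in-the-loop automation is confidence-based routing: low-confidence extractions go to a person, everything else flows straight through. The field names, scores, and threshold in this sketch are invented for illustration and do not describe any particular product's logic.

```python
# Confidence-based routing for document extraction (illustrative only).
# Fields, confidence scores, and the 0.9 threshold are hypothetical.
REVIEW_THRESHOLD = 0.9

def route(extracted_fields):
    """Split machine-extracted fields into auto-accepted vs. human review."""
    auto, review = {}, {}
    for field, (value, confidence) in extracted_fields.items():
        (auto if confidence >= REVIEW_THRESHOLD else review)[field] = value
    return auto, review

document = {
    "invoice_number": ("INV-10422", 0.98),
    "total_amount": ("1,845.00", 0.95),
    "due_date": ("2023-04-O1", 0.62),   # low confidence -> a human checks it
}

auto_accepted, needs_review = route(document)
print("auto-accepted:", auto_accepted)
print("needs human review:", needs_review)
# Reviewer corrections can be fed back as labels ("continuous training").
```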

AI vs. Humans: Has Insurance Become Too On-Demand? Commentary by Justin Kozak, Executive VP at Founder Shield

I believe insurance could qualify for being too on-demand, as we have seen some of insurtech's efficiencies cause complications around coverage and scalability. Insurtech solutions that entail automated underwriting are really the focus here. While they are great for more vanilla risk or smaller companies, the accessibility of securing coverage this way has bled into more complicated risk profiles (like the fintech or crypto industries) that require a professional touch. It is two-fold: for one, tech-dependent brokers are not providing the proper guidance to clients on what to apply for and, more important, how the coverage should be structured or their company classified. This can lead to complications in the event of a claim, as misclassification or poorly structured programs can lead to claim denials or gray areas in coverage applicability. Past that, there are automated underwriting shops as well, which can create scalability issues as more significant risks are being underwritten in line with lower-risk businesses. Once a true underwriter gets eyes on what slipped through the system, they quickly non-renew or increase rates exponentially to match the true risk. Ultimately, I believe on-demand or automated insurance solutions have a place in our industry. Still, there is a need to recognize a proverbial line whereby clients, brokers, and underwriters acknowledge the need for professional attention and expertise.

How Digital Twins Improve Revenue Cycle Management. Commentary by Jim Dumond, Sr. Product Manager, VisiQuate

Digital twins are virtual representations or models of machines or systems that exist in the physical world. For example, because testing a wind turbine in the real world is time-consuming and costly, a team of scientists will instead build a digital twin that is a perfect representation of a turbine. Then, the team will test the model by presenting it with different environmental factors and breakdowns. Essential to the development of digital twins is a technique known as process mining, which involves a deep dive to study a process from end to end and then determine how the process deviates from expectations or produces unexpected outcomes. In the revenue cycle world, process mining requires analyzing all the data that is captured by a health system, such as order systems, referral management systems, billing platforms, electronic medical records, payer data, and EDI data. Then, this disparate data must be collected into one location, enabling revenue cycle leaders to understand how the data fits together to convey the current state of an account. After performing these tasks, the health system has developed a digital twin of its revenue cycle processes. Once generated, digital twins can be used to simulate the effects that process changes may create for factors like accounts receivable, payer relationships, and staffing. Importantly, developing a digital twin must always begin with a robust data strategy to create a virtual representation of a revenue cycle encounter, such as a claim or invoice, to fully account for an entire process.
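At its simplest, process mining counts the transitions that actually occur in an event log so they can be compared with the process you expected. The sketch below uses a hypothetical claims log with made-up statuses; a real revenue cycle twin would draw on far more systems and fields.

```python
# Minimal process-mining sketch over a claims event log (hypothetical data).
# Counting observed status transitions per claim reveals how the real process flows.
from collections import Counter
import pandas as pd

log = pd.DataFrame({
    "claim_id":  [1, 1, 1, 2, 2, 2, 2, 3, 3],
    "timestamp": pd.to_datetime([
        "2023-01-02", "2023-01-05", "2023-01-20",
        "2023-01-03", "2023-01-06", "2023-01-11", "2023-01-25",
        "2023-01-04", "2023-02-15"]),
    "status": ["submitted", "accepted", "paid",
               "submitted", "denied", "appealed", "paid",
               "submitted", "paid"],
})

transitions = Counter()
for _, events in log.sort_values("timestamp").groupby("claim_id"):
    statuses = list(events["status"])
    transitions.update(zip(statuses, statuses[1:]))  # consecutive status pairs

for (src, dst), count in transitions.most_common():
    print(f"{src} -> {dst}: {count}")
```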

MSFT Lays Off AI Ethics Team. Commentary by Noam Harel, CMO & GM North America at ClearML

Microsoft's decision to fire its AI ethics team is puzzling given their recent investments in this area. Ethical considerations are critical to developing responsible AI. AI systems have the potential to cause harm, from biased decision-making to violating privacy and security. The ethics team provides a crucial oversight function, ensuring that AI technologies align with human values and rights. Without oversight, the development and deployment of AI may prioritize profits or efficiency over ethicality, leading to unintended or harmful consequences.


Meet the Winners of ‘Women in AI Leadership Awards’ at The Rising … – Analytics India Magazine

Analytics India Magazine's Women in Tech Leadership Awards 2023 at The Rising celebrates leaders who have made a longstanding impact in the AI/ML and tech spaces with years of hard work, commitment and dedication.

We received submissions from leading firms that nominated people who have made massive contributions and demonstrated expertise in gaining business value. All submissions were assessed by our panel of editors and industry veterans, and awardees were selected after careful review.

Check out the winners below:

With over 12 years of experience in digital transformation programs for Fortune 500 clients, Anjali specialises in cloud data engineering, product engineering and multi-cloud MLOps. Her leadership skills have driven significant improvements in business metrics for all her clients.

Debarati currently serves as the chief product officer at both Futurense Technologies and Miles Education and has over 15 years of experience in building scalable tech talent. Her expertise lies in building and implementing data-driven skilling and setting up innovative products and business models.

Divya is a seasoned expert with 13 years of experience in digital transformation and consulting. Having worked with top consulting firms like Deloitte, Accenture, and Cognizant, Divya now heads the IT division for Dyson India. Divya is also a PhD candidate in digital transformation at NMIMS Mumbai and has published several papers in the field.

Jayashree is the engineering head at Wipro Digital Ltd with extensive experience in leading digital transformation initiatives for US clients. With expertise in Agile software development and AI/ML-powered platforms, Jayashree has successfully delivered global transformation programs for banking, media, and healthcare clients. A graduate from Madras University, Jayashree studied Disruptive Strategy and Innovation and Sustainable Business Strategy at the Harvard Business School.

Aside from her role as the senior manager of the company's data science team, Kavitha is also a global co-leader for their AI/ML projects. With over a decade's worth of experience in all aspects of data science, Kavitha has built solutions for top retail clients in dynamic price optimisation, price microzoning, markdown optimisation and revenue growth management engines. Kavitha's contributions have delivered annual cumulative benefits of more than USD 20 million.

Armed with experience in managing more than 100 data scientists, analysts and intelligent automation experts, Khushboo is skilled in data engineering, RPA automation and Tableau data visualisation among other things. Khushboo is a pioneer in developing voice biometrics for call centre augmentation and implementing sentiment analysis and survey analysis using AWS Comprehend and NLP.

Having led successful end-to-end conceptualisation, development, and deployment of data integration solutions and enterprise applications, Megha has over 13 years of experience in data and technology strategy, consulting and enterprise application design and development.

At Genpact, Megha manages full-stack AI/ML solutions, strategic offerings, and new initiatives in cloud analytics and MLOps, working closely with Hyperscalers to create industry-specific offerings.

Monali has been with Eaton since 2009 and has led various groups and COEs, including the digital segment software groups and product cybersecurity and Distributed Energy Resource Management System (DERMS) CoEs.

With a BE in Electronics from the University of Pune, Monali has an impressive track record of leading product groups and technology CoEs, expanding teams to over 250 engineers, and operationalising labs. She has also filed one invention disclosure and presented two technical papers on industrial IoT and retrofitting wireless connectivity for hydraulic components.

Neha is a seasoned analytics and AI solutions provider with over 18 years of experience with clients from investment and retail banking, retail, and tech, media and entertainment (TME) verticals. Currently serving as the client partner and head of analytics consulting-India for TME at Fractal Analytics, Neha manages a team of more than 250 members and has worked with Fortune 500 companies in over 30 countries.

She specialises in enabling customer lifecycle management and product decisions through analytics. Notably, Neha is the product and vision owner for Trial Run, Fractal's experimentation product, which has enabled a major Fortune 500 retail company to innovate by running and analysing over 200 experiments annually in their stores.

Priya is a veteran in IT security with over 23 years of experience in cyber risk, cloud security, data privacy and protection, access governance and compliance. She has played a significant role in developing next-gen managed security platforms and SaaS (Security-as-a-service) platforms. She is also the diversity and inclusion champion at Happiest Minds and has conceptualised innovative initiatives to promote diversity, like a women-only hackathon annual event at the company.

Rashmi leads the platform development and tooling practice at Société Générale and directs the engineering strategy and adoption of hybrid cloud platforms. She also leads global communities like the development and open source experts guild and the developers community for DevSecOps enthusiasts.

She was recognised as Manager of the Year in 2021 for her efforts in driving innovation and inclusivity while mentoring future leaders.

Reji is a key player in shaping the strategy, engineering and governance of the cloud and software Agile/CICD platform for all Verizon IT divisions globally. As the India chapter lead for WAVE (Women's Association of Verizon Employees), Reji helps with creating a pipeline for women leadership. She has won numerous awards for her work, including the TrendSetter Woman in IT by eWIT in 2020.

A technical and strategic leader with around 20 years of experience in data-driven tech solutions, Ruma is a principal engineer at Intel and leads the companys analytics and DevOps group in Bangalore. Having worked with companies like IBM Labs, Juniper and Novell in the past, she is an expert in data science, MLOps, statistics, AI/ML-based product development, research and consulting in hybrid cloud environments.

She also has three patents to her name and acts as an advisor to industry and government AI/ML initiatives driven by non-profit organisations across India.

Safala started her journey as a hardware engineer and rose to the position of SME at Logica within a year. After a stint as a program manager at IBM, she currently leads a team of more than 160 engineers in day-to-day services for public cloud services at Kyndryl.

Safala is also a mentor to more than 50 employees as a part of the Women in Technology program.

Seema has developed a recommendation engine for short video content and a vernacular voice bot for multiple Indian languages, apart from working with agritech customers to provide accurate recommendations on crop care and financing for farmers.

She is also an active advocate for women in tech and serves on the DEI (Diversity, Equity and Inclusion) Council for Google Cloud, India.

Shanthi has been delivering transformative data and analytics solutions for over 20 years to large scale enterprises. She is behind the go-to-market strategies, brand awareness and driving demand for customer and talent acquisition for many clients. Shanthi has also developed reusable solutions for common data and analytics problems, bringing home several awards for InfoCepts.

At Unilever, Sindhu is currently responsible for leading the industrialization of AI at scale for its distributor management systems. In a career spanning 26 years in business transformation, Sindhu has a track record for implementing data solutions for top retail, entertainment, and manufacturing companies like NBC, Marks & Spencer, ABInBev, Dell, Tesco and Unilever.

Sneha is currently leading the overall delivery of cloud-based data analytics products in the life sciences clinical domain at Saama. She has a proven track record of account management and delivery of complex IT programs in the clinical data analytics space with expertise in custom implementation of clinical data platform projects in data analytics and AI/ML.

She has worked with top 20 pharma companies and helped them quicken the drug research process using data analytics and AI/ML.

Across the seven years of working with Tiger Analytics, Soumya has built AI/ML-based end-to-end solutions for clients from retail, banking, technology and auto sectors. As a delivery partner, she handles a portfolio of more than 90 clients. Soumya is an expert at designing analytics roadmaps and discovery workshops for client organisations.

With a background in economics and statistics, Susmita's work is focused on financial analytics, pricing strategies and marketing analytics. She co-founded an analytics firm, SCAnalytics, which developed and deployed cutting-edge analytical software solutions for global clients before being acquired by Marketing Management Analytics in 2010.

Currently, Susmita's work at Deloitte is around building analytics assets and complex global data engineering and analytics transformational engagements, while also enabling DevOps delivery models.


How Is The Medical Wearable Landscape Evolving With Advanced … – Med Device Online

A conversation with Jacob Skinner and Will Berriss, Thrive Wearables

Conventional medical devices have revolutionized patient diagnostics and treatments and improved quality of life across a staggering breadth of applications. Applying software techniques, including data science, machine learning (ML), and general artificial intelligence (AI), has many ethical and regulatory dimensions. However, the future is heading rapidly toward a point where these techniques are driving a new wave of innovation and positive health impacts.

Wearable technology is advancing at a rapid pace and with this advancement comes the opportunity to capture a very different kind of continuous health data than that recorded in a hospital setting. How do we combine incredible new sensing technologies with robust software to take full advantage of the opportunity for intelligent, safe personal health monitoring?

Jacob Skinner is the CEO of Thrive Wearables, where he uses wearable technology to improve healthcare and democratize access to it. He completed a D.Phil. in experimental physics and has designed human-centered technology for over 10 years. He has designed commercially available electrophysiology sensors and applications at the University of Sussex's Sensor Technology Research Centre.

Will Berriss is a software engineer with over 20 years of experience in the field. He is a Chartered Engineer (CEng) and member of the Institution of Engineering and Technology (MIET) and has a Ph.D. in medical image processing.

Skinner: In the past, medical devices were things that we just used in hospitals and in doctors' offices. What we see now is what I call the consumerization of medical devices, in which they are still medical device certified and built within strict standards, but they are not necessarily life critical; they are more geared to monitoring, remote patient care, and supporting virtual wards.

It's a testament to advances in the industry, because these medical devices can be used by people on their own terms and they are much more accessible. They're not as cumbersome physically or as complex to use. What is really interesting as well is the way in which this is happening and the potential in this space. For example, the Apple Watch is a consumer electronics device, but under very particular conditions it's also a medical device, measuring specific ECG signals. It's still clear from a regulatory point of view which devices are medical and which are not, but for the user the boundaries are increasingly broad.

Berriss: It feels like the current landscape is not quite there yet. We're sort of dipping a toe in the water, but it would be great if apps and devices could become more heavily standardized and adopted. For this to happen, the measurements that tech could generate would need to be delivered to clinicians and be considered reliable for care.

This could result in more personalized healthcare and treatment that is even more tailored to a patient.

Skinner: There's a natural dissonance between medical devices and artificial intelligence that is tricky to navigate from a medical risk point of view. From a regulatory perspective, it's very hard to use dynamic algorithms because of the diversity of outcomes they might lead to, so you have to constrain the models very carefully and ensure outcomes are within specified boundaries.

Berriss: The way I see things, there is a lot of data being captured by private companies but, going forward, some or all of that data could be made available centrally, and by centrally I mean the NHS in the U.K. or perhaps Medicare or Medicaid in the USA. A huge amount of data processed intelligently about one thing, say a tumor, or even in relation to multiple patients, could give a much better understanding of the boundary of the tumor and, for example, how quickly it may or may not grow, which ultimately helps those managing it to make informed decisions.

Skinner: Yes. That's the core of it. These opportunities exist because of wearable technology and advances in sensors, which broaden the concept of medical devices. For example, tumor scanning has always been data heavy, but the breadth of what data is relevant now is definitely changing.

Nobody used to know how many steps they took in a day; now it's something most of us are aware of and can easily find out, and you can derive some basic stuff relating to exercise, but then you can take that so much further. For example, you could be measuring heart rate constantly. If you're at risk of heart disease or heart failure and there was a chance that you might be able to detect it just that little bit earlier, I think that's where the bigger opportunities are, in these big burdens on the NHS and big causes of death. But knowledge isn't always a good thing, and you can't just give people information that alarms them or that they can't interpret properly, but you also can't shield them from their rights to be informed. Ultimately, if you can predict that someone is going to get ill, that's a whole other resource requirement.

Berriss: Definitely, these techniques are being used in the financial world and in engineering and space, for example. As more AI and data science work happens, and it becomes more popular, more of those insights and improvements will get generated and there will be more and more that is relevant to the health industry.

If you had a room full of people, half healthy and some with a heart condition, and you went and took some measurements, the pattern could be in the data somewhere we don't already know about; that's the whole point, really. AI would typically spot those differences in the data that might not be visible on the face of it. And that's where it becomes really interesting: pattern recognition and solving things that humans can't.

Skinner: There are so many answers here! Let's start with democratization. If people are looking after their own health, then that's a great starting point, because health professionals cannot look after everyone at all times. First, these devices could start constantly measuring users' health, feeding information back to users, and potentially escalating to medical professionals as needed. That's kind of the bread-and-butter win in terms of wearable tech, constant data, and predictive and preventive healthcare.

So, for example, is somebody moving in the wrong direction and two years away from a heart attack? Having that kind of insight would be incredibly beneficial in order to have two years' worth of a mitigation strategy and to empower patients to take ownership of their physical health and make informed lifestyle choices. In regard to virtual wards and remote patient monitoring, it's blurring the boundary between hospitals and homes, so I think the real advances will be in semi-medical technologies or monitoring devices that are not critical care but are really good at predictive care.
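
To make the "measure constantly, feed back, escalate when needed" loop described above concrete, here is a minimal, purely illustrative sketch in Python. The thresholds, window size, and escalation labels are hypothetical placeholders invented for the example; this is not clinical guidance or any vendor's actual algorithm.

```python
# Illustrative only: a toy "measure constantly, escalate when needed" loop.
# Thresholds and labels are made up for the example, not medical guidance.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    timestamp: float        # seconds since epoch
    heart_rate_bpm: float   # reported by a wearable sensor

ALERT_BPM = 100   # hypothetical sustained heart-rate threshold
WINDOW = 30       # number of recent readings to average

def triage(readings: list[Reading]) -> str:
    """Return 'ok', 'notify_user', or 'escalate' from a rolling average."""
    if len(readings) < WINDOW:
        return "ok"
    recent = mean(r.heart_rate_bpm for r in readings[-WINDOW:])
    if recent > ALERT_BPM * 1.2:
        return "escalate"      # flag to a remote-monitoring clinician
    if recent > ALERT_BPM:
        return "notify_user"   # feed the information back to the user first
    return "ok"
```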

Berriss: I think it would improve people's health by involving them more. Currently, you can take your blood pressure at home, and when you go to see a clinician you could provide an attachment with vitals data from the last week. But it depends on whether we can get to a point where certain approaches to measuring health signals can be so standardized that users and medical professionals can feel confident in the validity of the data. This obviously needs regulating, so this data is medically useful and doesn't need to be replicated afterward, and that's the gap that needs closing.

Berriss: Safety and security ultimately come down to encrypting data and being careful about how, and with whom, you share it, especially with continuous monitoring. There is a lot of progress already happening here, with companies like Apple, which has encrypted the Apple Watch so it cannot connect with just any Bluetooth device to retrieve data from it. So, in some respects, what we need is already possible, but on the flip side is whether this can be legally proven. Would it hold up in a court of law that we have proof the data hasn't been tampered with or been placed into the wrong hands?
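
As a rough illustration of the "encrypt before you share" point, the sketch below uses symmetric encryption from the widely used Python cryptography package. It is a generic example assuming readings are serialized to bytes; it is not how Apple or any particular device actually protects its data, and key management, the hard part, is deliberately omitted.

```python
# Generic illustration of encrypting a health reading before transmission.
# Not any vendor's actual scheme; key management is omitted on purpose.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, held in a secure enclave or KMS
cipher = Fernet(key)

reading = b'{"heart_rate_bpm": 72, "timestamp": 1700000000}'
token = cipher.encrypt(reading)    # safe to transmit or store

# Only a holder of the key can recover the original reading.
assert cipher.decrypt(token) == reading
```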

Skinner: Medical software has to be highly validated and tested, as it's much more conservative in terms of what it's connecting to, how it's processing data, who's got access to it; essentially, it's all much more of a sandbox. So, there's a general additional cost and time investment required in medical software processing physiological data. I believe that an even more crucial question to consider is how data can be managed and stored securely, given that it contains valuable personal information that is at risk of being misused. The value of this data is enormous if it is used appropriately, but there is also a significant risk.

The concept of ownership through a blockchain or other associated access protocols is fascinating. When combined with encryption and health records, it creates a very interesting space. I believe that this combination will become increasingly important and have a significant impact in the near future. In addition, if individuals have more data and want to access it on their own terms for their own healthcare reasons, they may need to be given controlled access to other people's aggregated data on certain terms to compare their data and make conclusions. This represents a need for a sea change in how data is currently used, which is extremely top-down and stored in big databases with limited access. It is not easy to shift data around or gain insights from it, and it is not currently being used in a way that could be incredibly useful.

Furthermore, there is the potential for buying and selling of data in a way that respects privacy and anonymity, which could be useful for research purposes. However, this is not being done openly or on a large scale yet.

Berriss: One way to anonymize the data could be by generating random number codes assigned to users instead of real names. In the COVID-19 tracking apps, they used a number code to identify people instead of their real names, and this could potentially be one avenue to follow. So, it's not impossible, it just needs more work.
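
A minimal sketch of that kind of pseudonymization, assuming Python and a simple in-memory lookup table, might look like the following. A real deployment would need secure handling of the mapping (or no stored mapping at all), but the core idea of replacing names with random codes really is this simple.

```python
# Toy pseudonymization: replace real identities with random codes.
# The code-to-person mapping, if kept at all, must be stored securely.
import secrets

_code_for_person: dict[str, str] = {}

def pseudonym(person_id: str) -> str:
    """Return a stable random code for a person, minting one if needed."""
    if person_id not in _code_for_person:
        _code_for_person[person_id] = secrets.token_hex(8)  # e.g. '3f9a1c...'
    return _code_for_person[person_id]

record = {"patient": pseudonym("Jane Doe"), "heart_rate_bpm": 72}
print(record)   # identity is no longer visible in the shared record
```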

Berriss: I suppose using a blockchain to keep things secure and establish contracts between individuals and companies or medical bodies is important from an ethical perspective. For many years, people have been concerned about their personal data being used against them. For example, if you measure certain health metrics and discover that you are predisposed to a condition that could lead to your death in 10 years, you may not want this information to be shared with your health insurance company because they could deny you coverage for known or predicted conditions.

Therefore, it is crucial for individuals to understand the potential implications before using devices that track their health data. It is important for companies to disclose up front what kind of information they may discover and with whom they may share this information. This is especially true for individuals who are more vulnerable and may have health data that could lead to negative consequences.

Skinner: Yes, this is a critical issue. If we don't implement proper measures to secure personal data using a blockchain and other technologies, it is highly likely that this information will be vulnerable to hacking. The consequences of such breaches could be severe, especially for healthcare agencies like the NHS. In terms of regulation, I think there are two different issues taking place. The first is the exponential increase in the amount of data that is being accessed and stored in various databases. This increase in data flow is causing a problem, as there is a million times more data and millions of different nodes. The concern is how this data is being accessed and stored and the potential risks associated with it.

The second issue is related to the relaxation of medical standards. While there may be good reasons for this relaxation, there is a risk associated with it. The FDA is allowing many technologies to pass that traditionally wouldn't have. Examples could include many digital solutions that have come into being due to the emergence of mobile phones. These digital systems are usually cleared outside of the more stringent medical device regulations (under what is called a 510(k) submission), as well as a myriad of Class I and II devices, which are essentially just sensing and passing on the information in a qualified way. Taking this to the extreme leads to what are often termed wellness devices, which are very much positioned (in the market) as non-medical in nature. This creates a moving target in terms of what medical efficacy is and what medical technology credibility is. It's hard to know quite where the line is, and there's a strong chance that technology advances and regulation will not be well aligned most of the time. This is a nuanced discussion, but it's important to be aware of the potential implications of fast-tracked technologies in the medical field.

Berriss: I think the key difficulty with integrating anything we've discussed on the technical side lies in the sheer amount of data involved. First, there are network bandwidth constraints to contend with if you want to transmit the data elsewhere, and if you don't, then you'll need to process it locally, which presents its own set of challenges. It's often not feasible to transmit such large amounts of data.

If you integrate into a mobile application, while mobile phones are capable of handling complex tasks, there are still challenges around what works: for example, what processing you do on the device and what data you transmit. There may also be a question about what appetite users have for a device that's so powerful, which could also impact how such a device is created and define what form it takes.

Skinner: I believe the main challenge lies in proving the value of wearable technology. The equation that relates the value of using the technology to the friction it creates is well recognized. Essentially, if someone perceives a lot of value from using a piece of technology, they are more likely to adopt it. However, human-centered technologies that require close proximity can be quite invasive to personal space, making it difficult to overcome this barrier. Therefore, in order to encourage adoption, the value of the technology must be proven to be very high.

Even in cases where it could mean the difference between life and death, such as with wearable technology that could detect a potential heart attack, people may still resist using it if it feels too bulky or obtrusive. Additionally, even if the technology alerts them to a potential health issue, they may not take action to address it. In short, the biggest challenge is improving the adoption of wearable technology by increasing its value beyond the friction it creates.

Skinner: Essentially, I would say more of what we've already discussed. The themes we talked about all have timelines. These include the democratization of healthcare, using preventive and predictive measures to help people better understand and engage with their health, and reducing the number of hospital visits through these measures. Additionally, helping people go home sooner through virtual wards and remote monitoring is a key prediction.

Berriss: In my experience, you tend to know what's going to be the next big thing when you hear it as a recurring topic in the media or in this space. For example, I've seen many discussions about heart rate and heart rate variability. If you ask me what's immediately next, I would say something in that space.

See more here:

How Is The Medical Wearable Landscape Evolving With Advanced ... - Med Device Online

Read More..

Mithra-Ai Solutions: Vendor Analysis Spend analytics solution … – Spend Matters

This Spend Matters PRO Vendor Analysis provides an overview of Mithra-Ai Solutions and its offering for spend analytics.

Many organizations handle spend data in one of two ways: a series of spreadsheets or a system so complex that only data scientists understand it. Data kept solely in spreadsheets is hard to analyze, update properly and track. Data in complex systems can be useful but often requires time and expertise to dig through it, which could create a bottleneck.

Mithra aims to solve these problems by finding a middle ground: it is a complex data system, but it operates on the tenet of being user-friendly. Users upload their spreadsheets, and Mithra's AI does the hard work for them. While Mithra's AI works in the background to complete complex data analysis, the user-facing aspect of the product is simple and easy to understand.
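
Spend Matters does not publish Mithra's internals, so the snippet below is only a hypothetical sketch of the general workflow the paragraph describes: a user uploads a spreadsheet and categories are assigned automatically. The file name, column names, and keyword rules are invented for illustration; a real tool would rely on trained classification models rather than keyword matching.

```python
# Hypothetical sketch of "upload a spreadsheet, get categorized spend".
# File and column names are invented; real tools use ML, not keyword rules.
import pandas as pd

RULES = {                      # toy keyword-to-category rules
    "laptop": "IT Hardware",
    "flight": "Travel",
    "consulting": "Professional Services",
}

def categorize(description: str) -> str:
    text = description.lower()
    for keyword, category in RULES.items():
        if keyword in text:
            return category
    return "Uncategorized"

spend = pd.read_csv("spend_export.csv")           # columns: supplier, description, amount
spend["category"] = spend["description"].map(categorize)
print(spend.groupby("category")["amount"].sum())  # simple spend-by-category view
```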

Here's why Mithra-Ai matters:

This Spend Matters PRO Vendor Analysis explains what differentiates this vendor, gives an overview of its capabilities and competitors, provides tech selection tips and closes with key analyst takeaways.

Original post:

Mithra-Ai Solutions: Vendor Analysis Spend analytics solution ... - Spend Matters

Read More..

1 Warren Buffett Stock With 76% Upside to Buy Now, According to … – The Motley Fool

Warren Buffett has a knack for picking great stocks. Under his leadership, Berkshire Hathaway has built an investment portfolio worth over $308 billion, and many stocks in that portfolio have at least doubled in value, including Apple, American Express, and Coca-Cola.

MoffettNathanson analyst Sterling Auty believes another company will soon join that list. Berkshire invested $735 million in Snowflake (SNOW) at its IPO price of $120 per share in September 2020. Auty has a 12-month price target of $242 per share on Snowflake, which implies 76% upside from its current price (and roughly 102% upside from Berkshire's cost basis).
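
As a back-of-the-envelope check on those figures (using only the numbers quoted above, which were accurate at the time of writing), the arithmetic works out as follows:

```python
# Back-of-the-envelope check of the upside figures quoted above.
price_target = 242.0                 # Auty's 12-month target
cost_basis = 120.0                   # Berkshire's IPO purchase price

upside_from_cost = price_target / cost_basis - 1
print(f"{upside_from_cost:.0%}")     # ~102% above Berkshire's cost basis

implied_price = price_target / 1.76  # price consistent with 76% upside
print(f"${implied_price:.2f}")       # roughly $137.50 at the time of writing
```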

Is it time to buy this Buffett stock?

Companies depend on an ever-growing number of digital technologies that create tremendous amounts of data on a daily basis. But that data is often stuck in disparate systems, making it difficult for organizations to derive value from it. Snowflake aims to solve that problem with its Data Cloud.

The Data Cloud integrates multiple workloads that have traditionally required numerous point products, allowing businesses to ingest, store, and analyze the data spread across their IT ecosystems. The platform also aids collaboration by facilitating the secure sharing of data, and it supports data science by enabling the transformation of data for use cases like machine learning and advanced statistical analysis. Additionally, the Powered by Snowflake program allows organizations to build and operate data-driven applications in the Data Cloud.

Suffice it to say Snowflake offers a feature-packed platform, and no other product on the market provides the same functionality. But customers also benefit from its infrastructure-neutral design. The Data Cloud runs across all three major public clouds -- Amazon Web Services, Microsoft Azure, and Alphabet's Google Cloud Platform -- giving customers the freedom to work with the cloud vendor (or vendors) of their choosing.

Snowflake tailors its Data Cloud to specific industries. For instance, the company launched the Telecom Data Cloud earlier this year, building on other vertical-specific products for retailers, advertisers, and financial services providers. Those tailored products pair the functionality of the Snowflake platform with industry-specific partner solutions and datasets, accelerating time to value for customers. That go-to-market strategy is highly effective.

Snowflake's customer count climbed 31% to 7,828 in the fourth quarter of fiscal 2023 (ended Jan. 31, 2023), and the company reported a revenue retention rate of 158%, meaning the average customer spent 58% more over the past year. Very few companies ever achieve a revenue retention rate that high, which speaks to the value the Data Cloud creates for customers.

On that note, full-year revenue rose 69% to $2.1 billion, and the company generated free cash flow (FCF) of $496 million, up sixfold from the prior year. That represents a solid FCF margin of 24%. Those results are particularly impressive given the challenging economic environment.

Looking ahead, management expects product revenue to reach $10 billion by fiscal 2029, which implies revenue growth of about 30% annually over the next six years. The company is also targeting an FCF margin of 25%, which implies FCF of $2.5 billion in fiscal 2029.
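
A short check of that implied growth rate, taking the roughly $2.1 billion full-year figure cited above as the base (the target refers to product revenue specifically, so treat this as approximate):

```python
# Approximate check: what annual growth turns ~$2.1B into $10B in six years?
base_revenue = 2.1      # $ billions, fiscal 2023
target_revenue = 10.0   # $ billions, management's fiscal 2029 goal
years = 6

implied_cagr = (target_revenue / base_revenue) ** (1 / years) - 1
print(f"{implied_cagr:.1%}")   # roughly 30% per year
```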

The investment thesis is simple: Snowflake helps companies use big data to build applications and make informed decisions, and businesses that use data effectively stand to gain a competitive advantage over their peers. Additionally, its cloud platform improves operational efficiency by consolidating a variety of workloads, while eliminating the need to manage the underlying infrastructure. That puts Snowflake in front of a $248 billion addressable market, according to management.

However, shares currently trade at 21.4 times sales. That is a big discount to the historical average of 75.3 times sales, but it is far from cheap. Snowflake stock is down 65% from its high, but its current valuation leaves plenty of room for further share-price declines, especially in the event of a recession. For that reason, risk-averse investors should steer clear, but risk-tolerant investors should consider buying a small position in this Buffett stock today.

As a final caveat, investors should never lean too heavily on Wall Street's price targets. The odds of a 76% return over the next year are remote at best. Investors who buy the stock should be prepared to hold for at least three to five years.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. American Express is an advertising partner of The Ascent, a Motley Fool company. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Trevor Jennewine has positions in Amazon.com. The Motley Fool has positions in and recommends Alphabet, Amazon.com, Apple, Berkshire Hathaway, Microsoft, and Snowflake. The Motley Fool recommends the following options: long January 2024 $47.50 calls on Coca-Cola, long March 2023 $120 calls on Apple, and short March 2023 $130 calls on Apple. The Motley Fool has a disclosure policy.

Read more:

1 Warren Buffett Stock With 76% Upside to Buy Now, According to ... - The Motley Fool

Read More..

Council Post: The Rise of Generative AI and Living Content – Analytics India Magazine

Marshall McLuhan once said, "We shape our tools and thereafter our tools shape us." The concern about technology entering every human space is not novel. With successive developments such as processors, digital photography, creative editing suites, music editing software, and computer graphics, the discourse between human creation and technology has continued through time.

Humans are capable of leaps of logic that machines have yet to catch up with. At bottom, AI is built on ordinary computer programming, and recent advances and accomplishments in AI are inextricably tied to human intellectual capacity. Even where machines can process far more than the human brain, humans differ significantly in how they apply their knowledge, drawing on logic, reasoning, understanding, learning, and experience.

But concerns surrounding the man-versus-machine saga have often lost ground to reality. It is true that a number of technological advances have made human involvement redundant in certain aspects of the creative process. However, even though the fear of being replaced is real, it is unlikely that machines will replace humans completely.

This article discusses how content will evolve with the arrival of generative AI. At the same time, it addresses whether we would really need writers when we have AI to write for us. Will content evolve, or will it become too automated for readers?

According to a Reuters report, ChatGPT is listed as the author or co-author of more than 200 books available on Amazon as paperbacks or e-books. The investigation also found that, because Amazon's policies do not require users to disclose the use of AI in their books, the number of AI-authored books may be significantly higher than the number actually attributed to it. More than 200 e-books in Amazon's Kindle store credited ChatGPT as the author or co-author in February alone. As more such books are published, Amazon has introduced a new sub-genre devoted to books about using ChatGPT that are wholly authored by ChatGPT itself.

At present, readers do not engage with lengthy content but instead prefer to consume media that is pertinent, succinct, and tailored to their interests. This shift toward a more direct approach is something that consumers are demanding.

Customers want content that caters to their unique needs and interests. Such content may be tailored to the preferences of particular audiences using AI and other cutting-edge technology, giving each user a unique experience. The transition to interactive content is also altering how we absorb information. Interactive content offers a more dynamic and engrossing approach to information delivery, moving from static images and text to immersive and engaging experiences. Every story should either be relatable or present a possibility that could plausibly happen.

Simultaneously, there is an audience that wants narrative content, which is typically referred to as long-form content. AI can make long-form content by using a technique called natural language generation (NLG). NLG is a subset of artificial intelligence that focuses on generating human-like language from data.
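
As a small, hedged illustration of what "generating human-like language from data" can look like in practice, the snippet below drives an off-the-shelf text-generation pipeline from the Hugging Face transformers library. The small gpt2 model is chosen only because it runs locally; it is nowhere near the quality of the large commercial models discussed in this article.

```python
# Minimal NLG illustration using an off-the-shelf text-generation pipeline.
# gpt2 is a small, dated model used here only so the example runs locally.
from transformers import pipeline, set_seed

set_seed(42)                                          # reproducible sampling
generator = pipeline("text-generation", model="gpt2")

prompt = "Long-form content still matters because"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```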

Some people will prefer shorter, more precise content that gets straight to the point, while others will appreciate the depth and nuance that long-form content can provide. Additionally, the type of content and the purpose it serves can also impact its reception. For example, people may be more likely to consume long-form content for entertainment or educational purposes while they may prefer shorter, more concise content for news or information that needs to be consumed and understood quickly.

It's also worth noting that the rise of generative AI does not necessarily mean the decline of long-form content. While generative AI may be able to create coherent text, it may not necessarily be able to create engaging, thought-provoking content that resonates with readers. In many cases, long-form content is valued precisely because it provides an opportunity for in-depth exploration of complex topics, which may be difficult for generative AI to replicate.

A theory of concept localisation highlights a major challenge in comprehending unfamiliar ideas when they are presented without sufficient context. As human beings, we tend to rely on metaphorical explanations to make sense of complex concepts. We learn best when someone provides us with a metaphor that allows us to understand and contextualise the underlying meaning or connotation of the concept at hand. With the advent of advanced language models, one can take a given concept and translate it into metaphors that are tailored to an individuals unique background, making it easier for them to understand and absorb the idea.

While AI-generated content is yet to perfect this process and may require significant refining by a human editor, it has the potential to greatly speed up the content creation process and help businesses and individuals produce high-quality, engaging content at scale.

But just when we believed that this is the extent of what technology is capable of, something else comes along. No user can imagine how living content will develop in the future. Living content can be tweaked for consumers, in the sense that it can be personalised to meet the needs and interests of individual readers. This can be accomplished through the use of data analytics and machine learning algorithms that analyse a readers behaviour and preferences and then curate content that is tailored to their interests.
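
A toy sketch of that kind of personalization, under the assumption that both articles and readers can be described by simple interest tags, might look like this. Real systems learn from behavioral data and use far richer models, but the matching idea is the same.

```python
# Toy content personalization: score articles against a reader's interests.
# Tags and titles are invented; real systems learn these from behavior.
ARTICLES = {
    "Wearables and heart health": {"health", "wearables"},
    "Spend analytics with AI": {"procurement", "ai"},
    "Generative AI for writers": {"ai", "writing"},
}

def rank_for(reader_interests: set[str]) -> list[str]:
    """Order articles by how many tags they share with the reader."""
    scored = {
        title: len(tags & reader_interests) for title, tags in ARTICLES.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

print(rank_for({"ai", "writing"}))   # puts the generative AI piece first
```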

Living content can take many forms, including blogs, news articles, social media updates, and more. The key characteristic of living content is that it is constantly updated so that consumers can return routinely to get the latest information.

Mark Twain once said, "There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope. We give them a turn and they make new and curious combinations."

Generative AI is trained on existing ideas to present seemingly new content. At the same time, the value of ideas is enhanced by the increased efficiency and scalability of idea generation that generative AI brings. With generative AI, it is possible to create a large number of unique and original ideas in a relatively short amount of time, which can be particularly valuable for industries that rely on creative output, such as advertising and marketing. It is, however, noteworthy that, at present, humans are the only intellectual beings capable of such leaps of logic and epiphanies.

Generative AI can also improve the quality and diversity of ideas generated, as it can draw on a vast amount of data and knowledge to create new and innovative ideas. This can help businesses stay ahead of the competition by providing them with unique and valuable insights that would be difficult or time-intensive to obtain through traditional research methodologies.

Another way the value of ideas improves with generative AI is through the ability to personalise ideas based on individual preferences and needs. With generative AI, it is possible to create content that is tailored to the specific interests and preferences of an individual, which can improve engagement and drive better outcomes.

In this era of content, the use of technology, such as AI and data analytics, is becoming increasingly important as it can help content creators personalise their content, improve its quality, and reach their target audience with greater efficacy. AI writing has arrived and is here to stay. Once we overcome the initial need to cling to our conventional methods, we can begin to be more receptive to the tremendous opportunities that these technologies present. Not only do they offer writers the chance to advance from being merely word processors to thought leaders and strategists, they also quicken the pace of content creation significantly.

As this technology advances, authors will be able to devote more of their time to deep thought, developing their creative visions and original viewpoints. The majority of those who will profit from this inevitable change in the industry will be writers with original ideas. By expressing these thoughts with impact, clarity, and conciseness, the world of content creation is looking at a renaissance of its own.

This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry. To check if you are eligible for a membership, please fill out the form here.

Continue reading here:

Council Post: The Rise of Generative AI and Living Content - Analytics India Magazine

Read More..