
Wildfire dataset could help firefighters save lives and property – UC Riverside

A team at UC Riverside led by computer science assistant professor Ahmed Eldawy is collaborating with researchers at Stanford University and Vanderbilt University to develop a dataset that uses data science to study the spread of wildfires. The dataset can be used to simulate the spread of wildfires to help firefighters plan emergency responses and conduct evacuations. It can also help simulate how fires might spread in the near future under the effects of deforestation and climate change, and aid risk assessment and the planning of new infrastructure development.

The open-source dataset, named WildfireDB, contains over 17 million data points that capture how fires have spread in the contiguous United States over the last decade. The dataset can be used to train machine learning models to predict the spread of wildfires.

"One of the biggest challenges is to have a detailed and curated dataset that can be used by machine learning algorithms," said Eldawy. "WildfireDB is the first comprehensive and open-source dataset that relates historical fire data with relevant covariates such as weather, vegetation, and topography."

First responders depend on understanding and predicting how a wildfire spreads to save lives and property and to stop the fire from spreading. They need to figure out the best way to allocate limited resources across large areas. Traditionally, fire spread is modeled by tools that use physics-based modeling. This method could be improved with the addition of more variables, but until now, there was no comprehensive, open-source data source that combines fire occurrences with geo-spatial features such as mountains, rivers, towns, fuel levels, vegetation, and weather.

Eldawy, along with UCR doctoral student Samriddhi Singla and undergraduate researcher Vinayak Gajjewar, utilized a novel system called Raptor, which was developed at UCR to process high-resolution satellite data such as vegetation and weather. Using Raptor, they combined historical wildfires with other geospatial features, such as weather, topography, and vegetation, to build a dataset at a scale that covered most of the United States.

WildfireDB has mapped historical fire data in the contiguous United States between 2012 and 2017 with spatial and temporal resolutions that allow researchers to home in on the daily behavior of fire in regions as small as 375-meter square polygons. Each fire occurrence includes type of vegetation, fuel type, and topography. The dataset does not include Alaska or Hawaii.

To use the dataset, researchers or firefighters can select information relevant to their situation from WildfireDB and train machine learning models of wildfire spread. These trained models can then be used by firefighters or researchers to predict the spread of wildfires in real time.
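As a rough illustration of that workflow, the sketch below trains a simple classifier on a WildfireDB-style extract. The file name, column names, and label are assumptions chosen for illustration, not the dataset's actual schema.

```python
# Minimal sketch of training a fire-spread classifier on a WildfireDB-style
# extract. The CSV path, column names, and label are illustrative assumptions,
# not the published schema.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("wildfiredb_sample.csv")  # hypothetical export of the dataset

# Covariates roughly matching those described above: vegetation, fuel,
# topography, and weather for each 375-meter polygon and day.
features = ["vegetation_index", "fuel_type_code", "elevation_m",
            "slope_deg", "temperature_c", "wind_speed_ms", "humidity_pct"]
target = "fire_spread_next_day"  # 1 if fire is observed in the cell the next day

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```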

"Predicting the spread of wildfire in real time will allow firefighters to allocate resources accordingly and minimize loss of life and property," said Singla, the paper's first author.

The paper, "WildfireDB: An Open-Source Dataset Connecting Wildfire Spread with Relevant Determinants," will be presented at the 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks and is available here. A visualization of the dataset is available here. Eldawy, Singla, and Gajjewar were joined in the research by Ayan Mukhopadhyay, Michael Wilbur, and Abhishek Dubey at Vanderbilt University; and Tina Diao, Mykel Kochenderfer, and Ross Shachter at Stanford University.

Header photo: Mike Newbry on Unsplash

Link:

Wildfire dataset could help firefighters save lives and property - UC Riverside


Regents approve plan for computer science and information building | The University Record – The University Record

The new 163,000-square-foot Leinweber Computer Science and Information Building on North Campus will, for the first time, bring together under one roof the School of Information and the College of Engineering's Computer Science and Engineering Division.

With the schematic drawing approved by the Board of Regents on Dec. 9, the state-of-the-art facility is coming into focus.

In addition to being an innovative learning environment for students, the facility represents a convergence of disciplines that will strengthen collaboration, foster innovative research partnerships and lead to the development of breakthrough technologies.

"When we combine our computing and information expertise, we can drive innovation and help solve some of humanity's greatest challenges in modern medicine, transportation, and smart infrastructure," said Alec Gallimore, the Robert J. Vlasic Dean of Engineering, Richard F. and Eleanor A. Towner Professor, Arthur F. Thurnau Professor, and professor of aerospace engineering.

In October, the Board of Regents voted to name the facility the Leinweber Computer Science and Information Building in recognition of a $25 million gift from the Leinweber Foundation. The $145 million Leinweber Computer Science and Information Building is scheduled to be completed in the summer of 2025.

Currently, CSE and UMSI are located on different campuses a few miles apart. Once completed, the new facility will eliminate the need for top talent to choose between working in a CSE or UMSI environment, removing barriers between like-minded colleagues.

This convergence of disciplines also will strengthen the academic culture, promoting the fusion of human-centered and technical perspectives in critical areas such as artificial intelligence, human-computer interaction and information privacy and security.

"The School of Information is broadly interdisciplinary, and co-location with one of our core disciplines, computer science, unlocks fresh opportunities for instruction and collaboration," said Thomas A. Finholt, dean of UMSI. "Working together, we can more effectively create and share information, with technology, to build a better world."

In the last 10 years, the number of students enrolled in undergraduate and graduate programs in both CSE and UMSI has quadrupled. The new building will provide much-needed space to meet the increasing demand for computer science and information graduates for research, industry and education.

"CSE's academic programs in computer science, computer engineering and data science are some of the fastest growing at the university," said Michael Wellman, the Richard H. Orenstein Division Chair of Computer Science and Engineering and the Lynn A. Conway Collegiate Professor of Computer Science and Engineering.

"This facility will enable us to amplify our research collaborations with UMSI and grow to meet the societal imperative to provide the best education for more future computer scientists."

Beyond enhancing the research and academic missions of UMSI and CoE, the Leinweber Computer Science and Information Building represents a crucial step in the university's carbon neutrality mission. The facility is planned to include a geothermal heating and cooling system as a demonstration project.

Read more here:

Regents approve plan for computer science and information building | The University Record - The University Record


Council Post: Experiential Learning - An Essence To Address The Skill Gap In The Field Of Analytics And Data Science – Analytics India Magazine

The method of teaching has been a subject of discussion and debate for a long time. The effectiveness of training and skilling has been questioned and deliberated time and again globally, irrespective of the field, stream, sector, or specialisation.

The education sector, of late, has been witnessing a move away from traditional teaching techniques. Rote learning is slowly becoming expendable, especially in practical fields like analytics and data science.

The pace at which these fields are evolving, coupled with rapidly increasing demand globally across all industry verticals, has created a significant gap in the supply of talent with the skillset to apply themselves to a given business context and create impact.

While academic institutes, MOOCs, and the like are doing a tremendous job in creating awareness and equipping talent with theoretical concepts and knowledge, there is a widening gap around enabling that talent with the right experience to be impactful on the job quickly. The rising attrition of experienced talent is adding to the pressure on the system.

There is no doubt that theoretical learning is foundational for analysts and data scientists, but the work requires individuals to critically understand business problems and create innovative solutions. This demands that they be continuous, prolific learners, creative thinkers, and quick problem solvers. The way to achieve these desired qualities is through learning by experience.

The learning-by-doing method allows learners to engage and actively participate in the learning process by working on projects and reflecting on them. This form of learning is proving to be the most effective route to success in the analytics and data science landscape. We will discuss how one can and should upskill oneself through experiential learning in analytics and data science.

Before delving into the essence of experiential learning, there are two fundamental concepts that every aspiring analyst and data scientist must internalise to become successful in the field.

1. Do not believe data without reasoning

Data is the basis for your trend analysis, prediction, and business solutions. If the data is faulty, the entire project will fail. One must question where the data comes from and reason with the data to ensure its validity and quality before moving on to any other step. For instance, last year, Italy had the highest number of COVID-19 deaths at one point. But a part of this situation owed itself to every death in an Italian COVID-19 hospital being counted as a COVID-19 death, regardless of the real cause. If one were to base one's predictions and trends on just the former statement, the results would be faulty.

2. Do not arrive at conclusions without critically examining the data

Complementing data reasoning, this step entails examining the data and distinguishing correlation from causation. Go a step further to ensure that the claims made by the data are backed by facts and information. For instance, citizens in the UK shop more during winter than in summer. At face value, this appears to prove seasonal consumer preferences, but in reality, winter coincides with Christmas and New Year sales, pushing customers to go on shopping sprees. Basing your analysis on the first statement alone would lead to an incorrect business solution.

The most fundamental concept on which all three streams of analytics (descriptive, predictive, and prescriptive) are built is clarity around correlation versus causation. Many analytics and data science applications fail to address business problems because a lack of critical examination leads to correlation being mistaken for causation and vice versa.
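A toy simulation makes the distinction concrete. In the sketch below, which uses invented numbers purely for illustration, spending is driven only by sales events; because sales events cluster in winter, season and spending still correlate.

```python
# Toy simulation of correlation without causation: spending depends only on
# sales events, but sales events cluster in winter, so winter and spending
# correlate anyway. All numbers are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
winter = rng.integers(0, 2, n)                       # 1 = winter, 0 = summer
sale_event = rng.binomial(1, 0.1 + 0.6 * winter)     # sales mostly run in winter
spend = 50 + 40 * sale_event + rng.normal(0, 10, n)  # spend driven only by sales

df = pd.DataFrame({"winter": winter, "sale_event": sale_event, "spend": spend})

# Naive view: winter looks strongly associated with spending.
print("corr(winter, spend):", round(df["winter"].corr(df["spend"]), 2))

# Conditioning on the confounder: within each sale_event group, the apparent
# seasonal "effect" largely disappears.
for flag, group in df.groupby("sale_event"):
    print(f"sale_event={flag}: corr(winter, spend) =",
          round(group["winter"].corr(group["spend"]), 2))
```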

The methods of teaching and learning are undergoing a significant change in the modern era. The traditional classroom approach, based on the foundations of listening to lectures and reading out of textbooks, is not proving successful in readying professionals for today's workplaces. A growing body of empirical research is demonstrating the advantage of experiential learning over conventional methods.

Setting the foundation for today's classrooms, Edgar Dale's Cone of Experience, or his Learning Pyramid (1940), illustrates how the depth of a person's understanding depends on the medium leveraged and the senses involved in the learning process. Dale's research identifies direct, purposeful, on-field experiences as the most effective method, resulting in 90 per cent retention of the information. In contrast, it identifies presented information, such as verbal and visual symbols, as the least effective learning method.

As Dale explains, people learn best when they are present in the action and learn from their experience. In the world of data science, opening up the learner's sensory channels to interact with the information at hand is bound to produce better results. Moreover, analytics and data science are practical fields, requiring practitioners to work on models, deal with data, and make engineering decisions. For instance, a data scientist cannot learn to solve a hackathon problem or build an intelligent model straight from a textbook, without brainstorming the possibilities.

Experiential learning methodologies and their effectiveness can be illustrated through the essential skills under the hard-skill and soft-skill umbrella in the analytics and data science space. While hard skills provide a foundation for all solutions, soft skills help in creating innovative ideas and communicating them. A nurtured combination of the two is what sets apart a data scientist from their peers.

Practitioners need to be skilled in textbook technical concepts to ensure that the best possible analytical approaches and models are built, but while this is necessary, it is not sufficient. They need to be seasoned in applying the concepts in real-life problem situations.

The way to develop application-oriented hard skills is to focus on three essential components.

1. Applied knowledge of algorithms

While one may have mastered algorithms, it is essential to know how and when to apply them. There may be instances when one comes across a problem where conventional algorithms don't work. One will need to be fluent in writing a new or heuristic algorithm, or creative in tweaking the old ones. Applied knowledge is learned from experience, so one must practice applying oneself in the right way.

2. Translation for business context

Data scientists often work with non-tech-based business professionals to find solutions to business problems or to create incremental business impact. It is paramount for them to understand the business context and translate it into data analytics problems, followed by building the right solution to map the context for timely implementation. This process also requires translating the solution back to business stakeholders in a language that they can comprehend. This is critical not only for a successful implementation of analytical solutions but also to set the stage for continuous improvement and incremental impact. Contextualisation leads to the adoption and growth of a data-driven culture within organisations. The skills acquired through the experiential learning approach can help with this endeavour.

3. Programming skills in Python or R

Python or R can handle applications from data mining and ML algorithms to running embedded applications under one unified language. Data scientists need to be skilled in one or both programming languages to be successful in the field. The application-oriented, case-study-based approaches enabled through experiential learning methodologies prepare one for industry readiness with this skill.

LinkedIn's Future of Skills report from 2019, which studied behavioural insights based on millions of data points from member engagement, identified soft skills as having increased value in enterprises. This, they reported, is due to the expanding application of new technology, which is broadening the job expectations for data scientists. The data science industry focuses largely on hard skills, but it is time we place enough importance on developing soft skills as well. There are three soft skills that are most important for a data scientist to nurture.

1. Critical thinking & problem-solving

Critical thinking and problem-solving skills assist data scientists in clarifying vague and broad problems. If the dataset has errors or is not understood correctly, the solution will be unsuccessful. Under the experiential learning framework, one can build these skills by participating in hackathons, building models for experimentation, or engaging with data.

2. Effective communication

Once one has solved the problem, it is important to communicate it to the stakeholders effectively. Data scientists' inability to communicate with stakeholders is a pressing concern within the industry. If the receiver does not understand the solution, it will not be implemented. Individuals can hone this skill by putting themselves out there, explaining solutions to non-technical people, receiving feedback, and working on enhancing the skill with more practice.

3. Agility & flexibility

Agility and flexibility are two skills that are becoming increasingly important. The agile approach to working empowers data scientists to prioritise and create roadmaps based on business needs and adapt to different goals. Agile individuals are always learning and growing from new practical experiences.

In summary, experiential learning is learning by doing, with application orientation and contextualisation. The framework is poised for wide adoption in the field of analytics and data science globally, across enterprises, functions, and academia. Aspirants and practitioners in the field should use the framework to become continuous, prolific learners, upskilling themselves in the most effective way and staying future-ready.

This article is written by a member of the AIM Leaders Council. AIM Leaders Council is an invitation-only forum of senior executives in the Data Science and Analytics industry. To check if you are eligible for a membership, please fill out the form here.

See original here:

Council Post: Experiential Learning - An Essence To Address The Skill Gap In The Field Of Analytics And Data Science - Analytics India Magazine


Five UB faculty named SUNY Distinguished Professors – UB Now: News and views for UB faculty and staff – University at Buffalo Reporter

Campus News

The distinguished professor designation is the highest faculty rank in the SUNY system.

UBNOW STAFF

Published December 10, 2021

Five UB faculty members have been named SUNY Distinguished Professors, the highest rank in the SUNY system.

Quanxi Jia, Marianthi Markatou, Janet Morrow, Robert Shibley and Mark Swihart were appointed to the distinguished professor rank by the SUNY Board of Trustees at its meeting on Nov. 9. They were among 11 new Distinguished Professors appointed at the meeting.

The rank of distinguished professor is an order above full professorship and has three co-equal designations: distinguished professor, distinguished service professor and distinguished teaching professor.

The five were all named Distinguished Professors in recognition of their national and/or international prominence and distinguished reputations within their chosen fields. According to SUNY, this distinction is attained through extraordinary contributions to, and impact on, the candidate's field of study, often evidenced by significant research and/or creative activity. The work must be of such character that it has the potential to elevate the standards of scholarship or creative activity of colleagues, both within and beyond their academic fields.

"UB's nationally and internationally renowned faculty have a tremendously positive impact on our world through their excellent research, scholarship, teaching, creative activities and clinical contributions," says A. Scott Weber, provost and executive vice president for academic affairs. "We are extremely proud that five of our most distinguished faculty members have been recognized for their leadership and groundbreaking contributions through appointment to SUNY's highest rank."

UB's newest SUNY Distinguished Professors:

Quanxi Jia, Empire Innovation Professor and National Grid Professor of Materials Research, is an internationally recognized leader in multifunctional and nanostructured materials for energy and electronic-device applications. He has made significant contributions to the development of high-performance superconducting coated conductors, or 2G wires, for electric-power applications. He invented and pioneered polymer-assisted deposition, a cost-effective coating technique to grow a vast number of electronic materials.

Jia has also designed and developed innovative approaches to integrate different materials for desired functionalities, making many original and high-impact contributions to nanostructured materials.

He has authored or co-authored more than 500 peer-reviewed journal articles, delivered more than 100 invited lectures and holds 50 U.S. patents.

Jia is an elected fellow of the American Association for the Advancement of Science (AAAS), National Academy of Inventors (NAI), American Ceramic Society (ACerS), the American Physical Society (APS), the Institute of Electrical and Electronics Engineers (IEEE), the Materials Research Society (MRS) and the Los Alamos National Laboratory.

Jia joined the UB faculty in 2016. In addition to his faculty appointment in the Department of Materials Design and Innovation (MDI), a joint program of the School of Engineering and Applied Sciences and the College of Arts and Sciences, he also serves as the scientific director of UB's New York State Center of Excellence in Materials Informatics (CMI), the founding co-editor-in-chief of Materials Research Letters and the principal editor of the Journal of Materials Research.

Marianthi Markatou, professor of biostatistics and associate chair of research and healthcare informatics in the Department of Biostatistics, is an internationally renowned expert in biostatistics, statistics and biomedical informatics. She has conducted seminal methodological research that has significantly advanced the fields of statistical robustness, mixture models, statistical distances, weighted likelihood methods and statistical machine learning.

Markatou has earned a distinguished reputation as an interdisciplinary scholar and has made pioneering contributions to both statistical sciences and domain sciences. Her interdisciplinary work has applied her rigorous statistical methodologies to advance pharmaco-epidemiological and emerging safety sciences research, biomedical informatics such as text mining to support patient safety, and computer science including big data analysis and data science.

Her work has been continuously supported by external funding agencies since 1990, and her influential statistical publications have appeared in highly regarded journals. Her current awards, including a $7 million grant from the Patient-Centered Outcomes Research Institute, the Food and Drug Administration, and the Kaleida Health Foundation, support Markatou's pioneering work on the foundations of data science and application to biomedical and public health research.

Among her many honors and awards, Markatou has been named a fellow of the American Statistical Association and the Institute of Mathematical Statistics, and a member of the International Statistical Institute.

Janet R. Morrow, Larkin Chair and UB Distinguished Professor of Chemistry, is a highly regarded expert in the field of metal ion complexes in biology and in biomedical imaging. She is recognized for her invention of transition metal-based MRI contrast agents that have the potential to monitor disease states, including bimodal imaging agents, paramagnetic liposomes and self-assembled cages as theranostic agents for imaging drug delivery to tumors.

Early in her career, she carried out highly cited studies on the mechanism for the recognition and sequence-specific cleavage of RNA by metal ion complexes, on the development of luminescence methods to study lanthanide complex catalysts, and the incorporation of lanthanide ions into modified DNA structures. To commercialize recent work on iron MRI contrast agents, she co-founded Ferric Contrast Inc., for which she serves as chief scientific officer.

Morrow has authored more than 120 publications, 12 book chapters and nine patents, and serves as associate editor of Inorganic Chemistry, the premier American Chemical Society journal in the discipline. She is the recipient of an Alfred P. Sloan Fellowship, a Special Award for Creativity from the NSF, and the Schoellkopf Medal from the ACS. She is a fellow of the AAAS.

Robert G. Shibley, professor and dean of the School of Architecture and Planning, is a globally renowned scholar, recognized for his work on the theory and practice of placemaking: the way we transform the places we are into places we love.

He has authored more than 120 publications, including 17 books and 15 book chapters. His productivity is all the more impressive given the significant administrative roles he has held, first as chair of the Department of Architecture during the 1980s, and as dean for the past decade.

Shibley has worked as principal investigator with faculty, staff, students and collaborating partners on over 80 Buffalo-based projects totaling more than $25 million in sponsorships. The work has received global attention for its impact on the people and places of Buffalo, and its elevation of practice-based research in architecture and urban planning. The work has also led to top national awards in his disciplines, including an American Institute of Architects (AIA) Thomas Jefferson Award and induction into the College of Fellows in the AIA, and the American Institute of Certified Planners.

He is the recipient of the UB President's Medal, the New York State AIA Educator of the Year Award, and 45 other international, national and regional honors for outstanding design and planning projects, as well as additional lifetime achievement awards.

Mark Swihart is UB Distinguished Professor, chair of the Department of Chemical and Biological Engineering, and Empire Innovation Professor in the UB RENEW Institute. He is recognized globally for developing new nanoscale materials, engineering practical processes for producing unique materials, and generating fundamental understanding of those processes. Nanoscale materials exhibit size-dependent properties and functions that enable new high-impact applications from biomedical imaging to renewable energy, and Swihart has made key contributions to the synthesis and post-processing of these materials. His research group has developed methods and materials that have been adopted by researchers and industry worldwide.

He has published more than 285 manuscripts on these subjects, which have been cited roughly 20,000 times. He co-authored the two most recent editions of the best-selling chemical engineering undergraduate textbook of all time and has co-founded two startup companies. He is a fellow of AIChE and AAAS, and has been recognized with the Schoellkopf Medal from the American Chemical Society and the Whitby Award from the American Association for Aerosol Research.

Swihart has led multiple campus-wide initiatives, including the UB 2020 Strategic Strength in Integrated Nanostructured Systems and the New York State Center of Excellence in Materials Informatics, and has collaborated broadly within and beyond UB, promoting interdisciplinary and convergent research approaches.

The rest is here:

Five UB faculty named SUNY Distinguished Professors - UB Now: News and views for UB faculty and staff - University at Buffalo Reporter


Athinia to Accelerate the Use of AI and Big Data to Solve Critical Semiconductor Challenges – PRNewswire

Athinia will bring semiconductor manufacturers and materials suppliers together to share, aggregate, and analyze data to unlock efficiencies. The platform will also enable industry actors to understand on a deeper level the interaction between materials and processes at semiconductor fabrication plants.

"The semiconductor industry is facing unprecedented disruption. This has created a critical need for a secure data collaboration platform that can provide the transparency and data intelligence companies need to solve challenges such as chip shortages and supply chain issues," said Kai Beckmann, Member of the Executive Board of Merck KGaA, Darmstadt, Germany, and CEO Electronics."Partnering with Palantir, we've combined our collective expertise in materials science, data analytics and security to increase our customers' efficiencies and time to innovation."

"We are excited to partner with market leaders in this space to create an ecosystem that will enable semiconductor companies and their suppliers to collaborate to make better decisions, combatting simultaneous demand and supply shocks," said Palantir COO Shyam Sankar. "Athinia will help companies across the value chain bring new products to market faster and accelerate their product differentiation and growth."

By harnessing Palantir's unique experience in building inter-organization ecosystems, Athinia will enable advanced data analytics, in turn limiting the costly impact of quality or performance excursions across the value chain, from supplier to semiconductor fabrication plants. It will also help fabs manage faster innovation in manufacturing processes in a single, secure platform that will support improved incoming material quality and increase supplier engagement. Suppliers will benefit from internal efficiency gains through smart data integration and can be a better partner for the fabs they serve. The partnership will help solve such challenges by creating a platform to analyze previously siloed data in a holistic way.

Merck KGaA, Darmstadt, Germany, and Palantir have already proven to be successful in using collaborative data analytics to help common customers minimize quality deviations and increase efficiencies. Athinia will build upon this experience and leverage Palantir's expertise to help customers improve their decision-making in optimizing semiconductor materials.

Merck KGaA, Darmstadt, Germany, has recently worked with leading semiconductor companies to leverage AI and data analytics for solving key challenges. "We worked with Merck KGaA, Darmstadt, Germany, to create a data sharing platform that enabled advanced predictive manufacturing for chemical mechanical polishing (CMP), a critical step in the semiconductor manufacturing process. Through this collaborative partnership, we implemented an AI-driven methodology to enable smart data collaboration that drove process and quality improvements. By extending this approach to the broader supply chain and enabling a data ecosystem, we believe advanced predictive manufacturing can be accelerated for the broader semiconductor industry,"said Raj Narasimhan, Corporate Vice President, Global Quality, Micron Technology, Inc.

The Athinia platform is powered by Palantir Foundry, which enables users to structure and analyze data from disparate sources, generate powerful insights and support operational decisions, all while helping to ensure that sensitive data is processed in accordance with applicable data privacy rules, regulations, and norms. Palantir Foundry is designed to provide world-class security, access controls, partitioning, auditing, and accountability functions to support responsible data use. Athinia acts independently from the Electronics business sector of Merck KGaA, Darmstadt, Germany, and enables data sharing only on codified and anonymized data; customers will retain full control of their data, including intelligent purpose-based access control management. The secure data collaboration environment will provide continuous feedback through a holistic view and a common operating picture of in-fab performance that can help solve quality deviations.

Merck KGaA, Darmstadt, Germany, and Palantir already started collaborating in 2017. Through the partnership "Syntropy," both companies are determined to unleash the power of biomedical data, revolutionize cancer therapy and accelerate research. Syntropy's aim is to provide researchers with intuitive analytics techniques to enable them to aggregate, analyze and then also share data from disparate sources.

For more information about Athinia, visit the website or social media channels: LinkedIn, YouTube, Twitter

All Merck KGaA, Darmstadt, Germany, press releases are distributed by e-mail at the same time they become available on the EMD Group Website. In case you are a resident of the USA or Canada please go to http://www.emdgroup.com/subscribe to register for your online subscription of this service as our geo-targeting requires new links in the email. You may later change your selection or discontinue this service.

About Merck KGaA, Darmstadt, Germany

Merck KGaA, Darmstadt, Germany, a leading science and technology company, operates across healthcare, life science and electronics. Around 58,000 employees work to make a positive difference to millions of people's lives every day by creating more joyful and sustainable ways to live. From advancing gene editing technologies and discovering unique ways to treat the most challenging diseases to enabling the intelligence of devices, the company is everywhere. In 2020, Merck KGaA, Darmstadt, Germany, generated sales of €17.5 billion in 66 countries.

The company holds the global rights to the name and trademark "Merck" internationally. The only exceptions are the United States and Canada, where the business sectors of Merck KGaA, Darmstadt, Germany operate as EMD Serono in healthcare, MilliporeSigma in life science, and EMD Electronics. Since its founding in 1668, scientific exploration and responsible entrepreneurship have been key to the company's technological and scientific advances. To this day, the founding family remains the majority owner of the publicly listed company.

About Palantir Technologies

Palantir Technologies Inc. builds and deploys operating systems for the modern enterprise. Additional information is available at http://www.palantir.com.

Who dares, wins.

Forward-Looking Statements

This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. These statements may relate to, but are not limited to, Palantir's expectations regarding the terms and the expected benefits of the strategic partnership. Forward-looking statements are inherently subject to risks and uncertainties, some of which cannot be predicted or quantified. Forward-looking statements are based on information available at the time those statements are made and were based on current expectations as well as the beliefs and assumptions of management as of that time with respect to future events. These statements are subject to risks and uncertainties, many of which involve factors or circumstances that are beyond our control. These risks and uncertainties include our ability to meet the unique needs of our customers; the failure of our platforms to satisfy our customers or perform as desired; the frequency or severity of any software and implementation errors; our platforms' reliability; and our customers' ability to modify or terminate their contracts. Additional information regarding these and other risks and uncertainties is included in the filings we make with the Securities and Exchange Commission from time to time. Except as required by law, we do not undertake any obligation to publicly update or revise any forward-looking statement, whether as a result of new information, future developments, or otherwise.

SOURCE Athinia

Here is the original post:

Athinia to Accelerate the Use of AI and Big Data to Solve Critical Semiconductor Challenges - PRNewswire


Aktify Announces New Chief Product Officer and New Chief Technology Officer to Expand Team and Product Offering – Yahoo Finance

SaaS veterans Chase Rigby and Dave Barney will advance Aktify's suite of revenue-building products

LEHI, Utah, Dec. 10, 2021 /PRNewswire/ -- Aktify, a conversational collective intelligence (CI) platform for enterprise businesses, recently announced the appointment of SaaS-industry product powerhouse Chase Rigby as Chief Product Officer and tech talent Dave Barney as Chief Technology Officer.

Working in tandem, Barney will guide engineering and data science teams at Aktify, while Rigby will spearhead the product development and design. Their leadership will further develop Aktify's science-first approach to cutting-edge conversational AI.


Rigby has vast experience creating and enhancing enterprise SaaS software. In his six years at Google, he led various teams at Niantic Labs, Search & Assistant, and ML reengagement for ads across YouTube and Chrome. He also led Google's Social Good team, processing payments for mining public data to predict public health epidemics that aided federal and NGO relief efforts.

Barney spent more than a decade at Google developing machine learning and digital attribution models, teaching ML classes to fellow Google engineers, and leading fulfillment efforts in its global support organization. Prior to Google, Barney spent a decade working at various ML/AI start-ups, where he came to know and understand the intricacies of building strong ML teams in a start-up environment.

Most recently, Rigby and Barney rebuilt the technology, product, stack, and culture at Kanopy, an enterprise video streaming platform. Their efforts resulted in the acquisition of the company by KKR and OverDrive in June 2021.


"Aktify has experienced extraordinary growth in the last year," said Rigby. "When you consider its technology, people, and market opportunity, that progress makes sense. There's much more room to grow, and I'm excited to be a part of it."


Barney and Rigby's appointment comes at a pivotal time for Aktify. The company continues to generate previously unseen revenue for its clients. It is onboarding an ever-increasing number of new customers. The dynamic duo will help innovate products and develop the technology to meet future customers' needs.

Aktify's product, data science, and engineering teams develop conversational AI technology. The AI has increased sales and meetings across various industries.

"Chase and Dave's expertise will allow us to take not only Aktify, but the entire conversational AI space to the next level," said Aktify CEO Kreg Peeler. "They join the other powerhouses in our company to create a product and a long-overdue solution for sales needs. We expect big things to come shortly."

About Aktify

Aktify is a conversational intelligence company. It uses robust data science and machine learning to delight customers with thoughtful conversations and well-curated gestures. Aktify's solutions are invisible, integrating with CRMs and marketing automation platforms. It creates millions of weekly conversations through SMS and phone calls, delivering a 10x ROI to clients. The company is headquartered in Menlo Park, California, and Lehi, Utah. To learn more, visit aktify.com.


View original content to download multimedia:https://www.prnewswire.com/news-releases/aktify-announces-new-chief-product-officer-and-new-chief-technology-officer-to-expand-team-and-product-offering-301442493.html

SOURCE Aktify

Read more from the original source:

Aktify Announces New Chief Product Officer and New Chief Technology Officer to Expand Team and Product Offering - Yahoo Finance


Implementing AI: Bridging the Gap | ARC Advisory – ARC Advisory Group

Summary

Implementing Artificial Intelligence (AI) has become a key challenge for organizations looking to create a competitive advantage through their data. Introducing a new technology (and associated process changes) is always a demanding task. According to ARC research, around 50 percent of respondents consider themselves to be in the piloting phase of an AI implementation. End users also expect 40-50 percent of all industrial applications to leverage AI by the year 2030. To reach this lofty goal, they need technology partners to help them address the main challenges that come with AI implementation: bridging the gap between multiple stakeholders and dealing with data. This ARC View will show how the use of RapidMiner as a partner for AI projects can help end users address these two key challenges.

Implementing an AI application is still a relatively new concept for many end users. As with any new technology or strategy, every step from proof-of-concept to lifecycle management can seem daunting. ARC's continuous research on artificial intelligence in manufacturing has uncovered the most common challenges that organizations face in their AI projects; you can see the results in the chart below.

The blue bars represent challenges that are not just unique to end users, but to their organizations as a whole. In the case of AI, these challenges are amplified by the fact that it is such a cross-functional technology and has to address the needs of plant floor workers, data scientists, business executives, and many other groups. At times, the interests and goals of these groups can vary strongly, leaving many gaps to bridge.

The orange bar represents a common blocker that's preventing many organizations from even getting started with AI. Anyone who works with data knows that you must clean and prepare it to be usable for analysis. For those who aren't lucky enough to have processes in place for data prep, this work can consume a lot of time and resources while having significant implications for the success of a project.

Still, as mentioned above, most of the respondents in our survey expect AI to become an integral part of production through 2030. Research also shows that early adopters of AI can typically expect a faster ROI and greater impact to their bottom line. So, what does it take to be part of the early adopter group?

The gap in question is characterized by different groups, their goals, and varying levels of knowledge in data science and manufacturing, respectively. The knowledge of these groups is often tribal knowledge that is only passed on within the group. As Sarma Malladi from the Swiss engineering and manufacturing company SWM International puts it, "all manufacturers face the problem of tribal knowledge."

To bridge the gap, strong management, leadership, and technology are needed. In ARC's view, these fundamental internal requirements must be supported with a suitable tool that fulfills the following crucial demands:

RapidMiner's tools specifically address the skills gap issue and support the need to bridge that gap.

The tailored user interfaces enable both beginners and experts to work with the same data. For example, beginners can use the AutoML solution RapidMiner Go to create basic models in just a few clicks, while experts can custom-code their own functions and share them with teammates using RapidMiner Notebooks.

This is all done on a single version of truth, e.g. a common database. This helps to bridge the aforementioned knowledge gap, as OT people from the plant floor can work with data quickly and create their own insights. Typically, the first aim is to re-create existing views and test the system against tribal knowledge. Then, after trust is built up, new insights and productivity gains follow.

In the following case study from the electronics industry, data from customer support was used by the data science team; given the size of the organization, the data represented millions of customers around the world.

As is the goal with any data science project, customers aren't aware of what's happening behind the scenes; they're simply served better as a result of the right model being implemented. The case study below also shows how AI impacted the post-sales department as well as the production of spare parts. All parties involved rely on and trust the predictions of their developed machine learning models.

The organization that implemented this use case is a well-known, leading electronics manufacturer for the consumer and professional markets. Its diversified business includes consumer and professional electronics, gaming, entertainment, and financial services. The company needed to reduce overall customer support costs and tasked the data science team in their post-sales organization with achieving that goal.

The main project owner and their data science team understood the basic customer support statistics -- how many people called, how long people stayed on the phone, how many people visited the support website etc., but it was more difficult for the team to determine why people were calling. The reason for their lack of understanding was that the vast quantities of unstructured data that could help them had not previously been used.

A first step towards deeper analytical insight and greater business value was to focus on classification analyses -- determining why people are calling and documenting it in as much detail as possible (reasons and multiple layers of sub-reasons). To do this, the team first had to automate many of their existing business processes. An example of this was the translation process. With RapidMiner, the team could create workflows that allowed unstructured data in 26 different languages to be routinely translated for easier interaction and analyses.
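A minimal sketch of that kind of pre-processing step might look like the following; translate_to_english is a hypothetical placeholder for whatever translation service or workflow the team actually uses, and language detection here relies on the open-source langdetect package rather than any RapidMiner component.

```python
# Sketch of a pre-processing step that routes multilingual support tickets
# through translation before classification and text mining.
# translate_to_english() is a hypothetical stand-in, not a RapidMiner API.
from langdetect import detect  # pip install langdetect

def translate_to_english(text: str, source_lang: str) -> str:
    # Placeholder: in practice, call the team's approved translation service here.
    return text

def normalize_tickets(tickets: list[str]) -> list[str]:
    """Detect each ticket's language and translate non-English text."""
    normalized = []
    for text in tickets:
        lang = detect(text)  # e.g. "de", "ja", "en"
        normalized.append(text if lang == "en" else translate_to_english(text, lang))
    return normalized

print(normalize_tickets(["My console will not power on.", "La consola no enciende."]))
```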

This electronics manufacturer first used RapidMiner for web and text mining to support their classification analysis, which allowed them to identify trends and the reasons behind customer service calls. Today, the team is moving on to do more powerful analysis with RapidMiner, such as:

In a recent panel discussion at the ARC Europe Forum, it was stated that 80 percent of the work on AI is not about AI itself. One of the major challenges is lack of data and poor data quality. Imagine having to connect 40 years of equipment usage in a brownfield plant to gain access to the necessary data. Even if you're successful, the resulting data will likely vary in completeness, collection frequency, units, accuracy, availability, etc. It's also likely that most of the data has not been labeled correctly, which creates even more work.

Over the years, ARC has done a lot of research into the way organizations typically approach the problem of unlabeled data. Most rely on internal experts, often supported by some sort of tool, which can range from Excel templates to more sophisticated software. This time-consuming process often increases the true cost of a project.

To support data preparation and labeling, RapidMiner offers its data preparation tool Turbo Prep, which helps address the issue of inconsistent data. Its supporting functions are divided into five broad categories:

In this case study, the electronics manufacturer first experienced benefits by simply using the translation function. It created insights and tangible benefits without even getting to the core of AI.

The end user also mentioned that previously, the data science team operated in an 80/20 environment: 80 percent of its time was spent on collecting and managing data, and only 20 percent on analyzing it. Now that all the tedious tasks of data cleansing and collection are automated with RapidMiner, the company has flipped this ratio -- 80 percent of the team's time is spent on in-depth data analysis, and only 20 percent on collecting and managing data.

ARC has experienced it often in the past: end users and machine builders do not implement any new technology unless they understand it fully and are convinced it can help them achieve their goals. This has often resulted in the development of proprietary, in-house solutions, which can be effective but is extremely time- and resource-intensive. After solutions of this nature are implemented, they often stay in operation for up to 40 years, resulting in huge lifecycle costs.

While it is certainly true that an end user or machine builder needs to understand the technology fully, as they are responsible for the safe operation of plants and equipment, the right platform (such as RapidMiner) combined with industry expertise can help to kickstart the process, accelerate implementation, lower lifecycle costs, and even create opportunities that go well beyond the initial scope that your organization envisions.

ARC Advisory Group clients can view the complete report atARC Client Portal

If you would like to buy this report or obtain information about how to become a client, pleaseContact Us

Keywords: Artificial Intelligence, AI, Machine Learning, AI Implementation, RapidMiner, Data Conversion, Anomaly Detection, ARC Advisory Group.

Follow this link:

Implementing AI: Bridging the Gap | ARC Advisory - ARC Advisory Group


What Agencies Need to Do to Combat Shadow IT Driven by Cloud Sprawl – Nextgov

Migrating to the cloud offers federal agencies huge advantages in performance and flexibility. Government services can't effectively scale or adopt new capabilities like big data analytics, artificial intelligence, machine learning and internet of things without migrating to the cloud. But government cloud adoption has empowered an old IT nemesis: shadow IT.

Shadow IT is the use of IT systems, devices, software, apps and services outside the supervision of an organization's approved IT systems. In the past, shadow IT was typically a business unit creating their own locally developed applications, or LDAs, because engagement with the office of the chief information officer was judged too onerous. During my time in public service, I saw personnel surreptitiously use Microsoft Access to address an urgent data processing need, and it inadvertently turned into a mission-critical system. This was only discovered when Microsoft Access reached its scaling limits, which then turned into an emergency project to transform it into a web-based application.

Building LDAs is even easier when using cloud services. This opportunity for shadow IT is exacerbated by government mandates to move to the cloud prior to the development of a governance structure that can monitor and manage such a move. Combine all this with the very human tendency of development teams to experiment with creating cloud resources and not clean up after themselves, and the result is more shadow IT and cloud sprawl.

Cloud sprawl is inefficient use of the cloud: over-provisioned, over-scheduled, underutilized or orphaned cloud assets. It often happens when development teams spin up new cloud resources, forget about them, then move on to the next urgent task. Even when cloud servers are terminated, the servers' storage volumes, in a sense virtual hard drives, are often left behind. This creates orphaned cloud resources.

Teams also size cloud resources too large based upon the legacy technical specifications coming from on-prem data centers, instead of starting small and using cloud elasticity for auto-scaling. This results in over-provisioned and underutilized resources. This cloud sprawl increases costs and often leads to overruns in government program budgets.

Cloud sprawl and the related lack of governance can also make agencies more vulnerable to data breaches. When development teams create cloud resources, they may not fully understand the impact of its related configurations, as was the case in the 2019 Capital One data breach that enabled access to sensitive records stored in Amazon Web Services S3 buckets. To mitigate the risk introduced by misconfigured cloud resources, agencies need to define cloud usage standards and implement ways to monitor compliance to those standards.
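As a concrete, if simplified, example of monitoring compliance with such standards, the sketch below uses boto3 to flag S3 buckets that lack default encryption or a public access block. It is a generic illustration of configuration monitoring, not a description of any particular agency's or vendor's tooling.

```python
# Minimal compliance scan: flag S3 buckets that lack default encryption or a
# public access block. Illustrative only; real governance tooling adds
# reporting, exemptions, and automated remediation.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    findings = []
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            findings.append("no default encryption")
    try:
        s3.get_public_access_block(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            findings.append("no public access block")
    if findings:
        print(f"{name}: {', '.join(findings)}")
```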

Effective implementation of AIOps is the answer to modern-day shadow IT and cloud sprawl. Here's the Gartner definition: "AIOps combines big data and machine learning to automate IT operations processes, including event correlation, anomaly detection and causality determination."

One cloud-centric AIOps solution is robotic cloud automation, or RCA, a suite of AIOps capabilities that establishes governance guardrails and enforces usage standards across multiple cloud environments. For critical standards compliance issues, it can also remediate the non-compliance findings by bringing cloud resources back into the desired state configuration. This delivers significant cost savings and security improvements through automated monitoring, reporting and remediation of compliance issues.

For all enterprise cloud hosting teams, the first step to regaining control is to define your standards. When agencies are considering which standards to establish, they should embrace established industry standards. RCA is aligned with some of the most widely respected standards in the industry, including Center for Internet Security Benchmarks, NIST 800-53 and AWS Foundational Security Best Practices. These provide baseline standards to start from, including hundreds of configuration guidelines to safeguard cloud environments against todays evolving cyber threats.

As mentioned above, for many agencies the genie is already out of the bottle. Cloud adoption preceded a management structure, and teams have already created cloud sprawl and violated security best practices. In such cases, RCA deployment follows a predictable, iterative implementation pattern, first enabling monitoring and reporting to understand the depth and breadth of the compliance challenges. Then agencies need to drive an effective communication and change management strategy that engages cloud users to adopt the new cloud standards and iteratively improve compliance.

Once fully compliant with a standard, RCA can enable automated remediation, which locks in future compliance by maintaining the desired state configuration of cloud resources in perpetuity. For example, for every new server spun up in the cloud, RCA evaluates compliance with three core configurations: proper tagging, encryption and standardized security group usage. If the server fails any of these tests, it is automatically terminated. Cloud sprawl is nipped in the bud. It's truly governance as code.
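A stripped-down sketch of that desired-state check, written against the AWS API with boto3, might look like the following. The required tag keys, the approved security group ID, and the terminate-on-failure policy are assumptions for illustration; this shows the general governance-as-code pattern described here, not the RCA product itself.

```python
# Illustrative "governance as code" check: a newly launched instance must carry
# required tags, use only approved security groups, and have encrypted EBS
# volumes, or it is terminated. The tag keys, approved group ID, and
# terminate-on-failure policy are assumptions, not the RCA product's rules.
import boto3

REQUIRED_TAGS = {"Owner", "Project", "Environment"}   # assumed tagging standard
APPROVED_SECURITY_GROUPS = {"sg-0123456789abcdef0"}   # assumed baseline groups

ec2 = boto3.client("ec2")

def compliance_issues(instance: dict) -> list[str]:
    issues = []
    tag_keys = {t["Key"] for t in instance.get("Tags", [])}
    if not REQUIRED_TAGS.issubset(tag_keys):
        issues.append(f"missing tags: {sorted(REQUIRED_TAGS - tag_keys)}")
    groups = {g["GroupId"] for g in instance.get("SecurityGroups", [])}
    if not groups.issubset(APPROVED_SECURITY_GROUPS):
        issues.append(f"unapproved security groups: {sorted(groups - APPROVED_SECURITY_GROUPS)}")
    volume_ids = [m["Ebs"]["VolumeId"]
                  for m in instance.get("BlockDeviceMappings", []) if "Ebs" in m]
    if volume_ids:
        volumes = ec2.describe_volumes(VolumeIds=volume_ids)["Volumes"]
        if not all(v["Encrypted"] for v in volumes):
            issues.append("unencrypted EBS volume")
    return issues

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        issues = compliance_issues(instance)
        if issues:
            print(f"Terminating {instance['InstanceId']}: {'; '.join(issues)}")
            ec2.terminate_instances(InstanceIds=[instance["InstanceId"]])
```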

RCA is a powerful enforcement tool for any CIO managing a multitenant cloud environment. Yet critically, it's not enforcement in the old, top-down model of the past. RCA provides AIOps that enable teams to own more of the security responsibility because a cloud hygiene baseline is baked into the system. Agencies can save millions by embracing AIOps, shutting down existing cloud sprawl, and preventing it from happening again in the future.

Gone are the days when one central IT team could support 20, 40, or 100 separate development groups. It simply isn't possible due to the complexity of cloud service offerings, even if government agencies had the budget and the talent pool to attempt it.

I do understand the lingering appeal of the "do it ourselves" approach. I remember 10 years ago wondering if government could truly trust the big cloud service providers to support agency infrastructure and mission. That question has been definitively answered: yes. The cloud provides incredible capabilities to agencies that we couldn't imagine a decade ago. For example, the CSPs have perfected automated database failover in their managed database products, enabling reliable and consistent failover in minutes.

Long gone are the days of engineering database synchronization and manual failovers. Now RCA enables AIOps for government to eliminate shadow IT and cloud sprawl, and to securely explore the potential of the cloud.

Aaron Kilinski is co-owner and chief technology officer of Simple Technology Solutions.

Read the original post:
What Agencies Need to Do to Combat Shadow IT Driven by Cloud Sprawl - Nextgov


Nvidia CEO Huang jointly files patent for software tech in the metaverse – The Register

Nvidia's CEO Jensen Huang continues inventing, as if his role in the rise of GPUs wasn't enough.

A patent application published on December 2 credits Huang as one of the inventors of a system to open and share a file in the cloud without the need for a corresponding application on local devices.

Instead, the opened file is encoded and presented through a video stream, with everything happening in the cloud. To be clear, the application is a continuation of filings and patents granted dating back to 2012 related to graphics processing in the cloud and network-attached GPUs. The new patent hasn't been granted yet.

Names of company CEOs are often attached to patents as it adds legitimacy to an invention and makes it easier to defend in court. Steve Jobs was named in over 300 patents, for example.

The patent application, titled "method and apparatus for execution of applications in a cloud system," was filed in August this year and published this month.

The patent, if granted, could be a key cog in Huang's vision to move computing into the metaverse, specifically in engineering and scientific computing. It's like opening a CAD/CAM file via a cloud application and presenting it via a headset or mobile device to users, who can then manipulate the file within the video stream.

Headsets can already do two-way communication with cloud servers, but Nvidia is proposing a novel technique. The patent involves a cloud server receiving a file identifier from a device, pulling that file from a server, finding the relevant application in the memory of a cloud server, and then "executing the application as a video stream destined for the client device."

The patent filing seems relevant for applications in which engineers collaborate in real-time on the design of machines and equipment via the metaverse.

The patent document takes a hack at the disadvantages of desktop engineering software, which typically requires powerful computers. Many devices are also still created via a modular approach, with engineers creating different blocks of an overall design and patching them together.

Nvidia has shown many videos of engineers or scientists collaborating in the cloud through video streams.

CAD/CAM software already uses GPUs for simulation and design. The metaverse may be great for simulation, but pushing engineers into a cartoon interface to collaborate on design may be a challenge. Either way, Nvidia's doing all it can to sell more chips.

The rest is here:
Nvidia CEO Huang jointly files patent for software tech in the metaverse - The Register


Gill St. Bernard’s chess team ranked 8th in nation – New Jersey Hills

PEAPACK-GLADSTONE - Gill St. Bernard's juniors Peter Horton of Chatham, Akshay Patel of Warren and Tyler Tellalian of Far Hills traveled to Orlando, Fla., from Dec. 2-6 to test their chess technique against the best in the nation at the U.S. Chess Federation's National Championship.

The trio secured their spot in the competition after an impressive first place win in the 11th Grade New Jersey Chess State Championship.

Coached by Ian Prevost, the Gill Upper School computer science teacher and Chess Club faculty advisor, they returned home ranked eighth in the nation.

The accomplishment was impressive considering the lack of meaningful practice opportunities available to the boys during the pandemic.

As with many extra-curricular activities, face-to-face playing time was limited over the past year and a half, chess competitions were postponed, and the bulk of training was completed online.

Except for playing against members of the GSB Chess Club, the boys had not been exposed to the endurance or concentration required for competitive play.

"We prepped for two to three hours each night leading up to the competition, but we could have used more," Patel said. "The competition was brutal."

"We played two rounds on Friday, three on Saturday, and two more on Sunday," Horton explained. "There were over one thousand kids in the room, and each match had the potential to last three hours."

That adds up to about nine hours of chess a day.

The tense hours spent poring over the board did not dampen the boys' spirits; in fact, it inspired them to want more. All three plan to continue training in whatever fashion they can and hope to return to the national stage again next year and place in the top three.

While the goal may seem ambitious, club advisor Prevost said he believes they have what it takes.

"It's been amazing to see how much progress they've made in the previous two years," he said. "Over COVID, they each gained 500 rating points in strength, which places them in the top 5 percent of players their age who are playing online."

The boys are proud of the team's success but are even more grateful to have carried on the GSB legacy.

"This is the second time in four years that GSB has been to nationals," said Tellalian. "We have a lot of history in chess, and we want that to continue in the future."

GSB last earned a spot at nationals in 2018, and that talented trio, all seniors, returned home with a second place trophy.

Based upon this pattern of success, the boys' dream of building a lasting GSB chess dynasty is within reach, and the recent excitement will carry the team well into the next championship season.

Read the original post:
Gill St. Bernard's chess team ranked 8th in nation - New Jersey Hills
