It's Time To Break Glass On Cybersecurity Urgency – Forbes

Recent high-profile ransomware attacks have illuminated the need to strengthen cybersecurity measures.

If it seems that cybersecurity attacks are on the rise, you are not mistaken. When this year is in the books, every industry report will show how the frequency of attacks escalated sharply again while the cost of attacks became higher than ever before. We are less than five months into 2021, and with the Colonial Pipeline attack we have already witnessed one of the costliest cyberattacks ever. The cycle for major cyber incidents has become a matter of days, not weeks, as we witness continued major ransomware attacks, data loss, major breaches, and intelligence and industry warnings. If there was ever a time to get serious about cybersecurity, that time is now.

On May 7, 2021, Colonial Pipeline, an American oil pipeline that carries gasoline and jet fuel to the Southeastern United States, suffered a ransomware cyberattack that impacted the essential equipment managing the pipeline. The impact was so severe that it led to emergency declarations from President Biden, as well as the Governor of Georgia.

While the Colonial Pipeline hack has been well documented in terms of its multibillion-dollar impact on the US economy, consider the recent major Microsoft Exchange platform vulnerability that has also been cycling in the news. The industry struggled with remediation because tens of thousands of customers use the software suite, and the platform was specifically targeted due to its wide install base. Known to the industry as the HAFNIUM incident, it takes its name from a state-sponsored cyber espionage group out of China that has been profiled as the actor behind the attacks. Once infected, affected servers allowed remote code execution and untrusted network activity, even after some of the existing patch updates.

Another incident that comes to mind is the recent security breach at Ubiquiti, a major provider of networking equipment and Internet of Things (IoT) devices. In early January 2021, the company started notifying customers about an unauthorized access issue found in the management services for Ubiquiti systems. Months later, the understanding of the impact has grown to include loss of root credentials for cloud services, databases, private cryptographic keys, and more for thousands of direct and indirect clients.

And just a few short months ago, the SolarWinds supply chain software attack shocked many throughout the industry. Once again, the scale of impact was thrust upon thousands of companies. With each passing day, the threat is becoming more real than ever before.

The pandemic, and its resulting changes to the business world, "accelerated digitalization of business processes, endpoint mobility and the expansion of cloud computing in most organizations, revealing legacy thinking and technologies," according to Peter Firstbrook from Gartner. Old technologies and antiquated processes are certainly to blame. But far too often, we also witness ransomware victims simply paying up: as many as a third of businesses that reported a ransomware attack in 2021 decided to pay the ransom. Paying ransom demands encourages more hackers, and the statistics show that hackers come back to attack businesses that paid, while fewer than 10 percent of victims who pay ever fully recover their data. Paying for crime does not pay off, and it is a glaring example of poor preparedness and lack of strategy. The guard cannot be let down as millions of people continue to be impacted by these issues daily. Cybersecurity incidents are creating a bigger impact on the economy than many people have realized, as evidenced by the recent ransomware attack on Colonial Pipeline, which disrupted fuel supplies across the Eastern Seaboard. We need to evaluate what we are collectively doing right and what we are doing wrong.

There are, however, positive steps that can be taken immediately. We can apply the best of what we know to deal with these significant threats.

The first elementary step is to do some widespread cleanup. Get rid of all instances of default passwords, all of those passwords you think cannot be changed, and all of those strange devices and components that do not have any passwords in place. Even if it's on your private network, everything can be a vector, and hackers know it.
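
As a concrete example of what this cleanup can look like, here is a minimal audit sketch. It assumes a hypothetical inventory.csv export with device, username and password columns, and the default-credential list is illustrative, not exhaustive:

```python
# A minimal sketch of a default-credential audit, assuming a hypothetical
# inventory.csv export ("device,username,password") from an asset system.
# The default-credential list below is illustrative, not exhaustive.
import csv

KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def audit_inventory(path):
    """Flag devices still using default or empty credentials."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            creds = (row["username"].lower(), row["password"])
            if creds in KNOWN_DEFAULTS or not row["password"]:
                flagged.append(row["device"])
    return flagged

if __name__ == "__main__":
    for device in audit_inventory("inventory.csv"):
        print(f"Default or missing password: {device}")
```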

Next, enterprise IT needs to come to terms with our collective lazy nature. This means going through every component of an environment that is old, that was set up before anyone knew better, or that was set up with a focus on convenience or speed. These are classic weakest-link scenarios, and they are lying around everywhere; no environment is above these missteps.

Enterprises need to support greater cybersecurity urgency now, review security planning and embrace the leading principles of comprehensive cybersecurity. Ultimately, the price to be paid is unwavering diligence and a hyper-focus on better comprehensive security: first protecting the castle, then recovering from a breach, and finally gaining assurance that future attacks cannot be detrimental.

You can protect your assets and organization by looking for solutions that focus on:

You can plan for a recovery from a breach by implementing:

You need to have assurance that your infrastructure is truly protected by routinely conducting:

Only when we adopt this trinity of protection, recovery, and assurance can we reduce risks substantially and beat the bad actors.

Given the continual cycle of breach news, there is no reason to hold back on cybersecurity planning and budget. Organizations need to push the pedal on evaluation, assessment, monitoring, and contingency planning, and shift their mindset to always assume a breach is underway. Not only should organizations break the glass and get their security playbook in full swing, but they should also break the bank to fund it.

Go here to read the rest:
It's Time To Break Glass On Cybersecurity Urgency - Forbes

Read More..

Why Data Scientists and ML Engineers Shouldn’t Worry About the Rise of AutoML – Datanami

(Sdecoret/Shutterstock)

Low-code and no-code development tools are becoming increasingly popular, and the pandemic only accelerated this trend. When we think of low-code or no-code development, we're usually referring to tools that allow a non-software-engineer to create a digital app (or workflow) in a plug-and-play manner that doesn't require extensive technical knowledge.

But the idea of low-code or no-code engineering also extends to tools for machine learning and data science, and today we're seeing a proliferation of options in this category too, sometimes referred to as AutoML. As with low-code dev tools, the allure of these offerings is that they enable businesses to implement data science and ML workflows without needing the resources or expertise to build them from scratch. AutoML tools allow a user to input a dataset and then, with minimal data science knowledge needed, deploy a model to run over the data and generate results. It's tempting to think that AutoML fully breaks down the barriers to AI, allowing anyone to do this type of work, but for reasons I'll explain later, that's not really the case; quite the opposite, in fact.
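
To make concrete what these tools automate, here is a minimal sketch of the kind of model search an AutoML product performs under the hood, written with scikit-learn. The candidate list and dataset are illustrative; real AutoML systems add automated feature engineering, hyperparameter tuning and deployment on top of a loop like this:

```python
# A minimal sketch of the model search an AutoML tool automates; real
# products layer feature engineering, tuning and deployment on top.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Candidate pipelines an AutoML tool would search over automatically.
candidates = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Score each candidate with cross-validation and keep the best; this loop
# is essentially what "input a dataset, get a model" hides from the user.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"Best model: {best} (mean CV accuracy {scores[best]:.3f})")
```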

AutoML does have some potential benefits. This article from Deloitte notes two advantages in particular. The first is increased productivity for data scientists, who can speed up specific steps of the ML lifecycle through automation. This will ultimately enable data scientists to increase their value contribution to the business and to focus on more complex problems.

A second benefit is enabling non-technical business leaders to gain some access to ML, which makes particular sense in the context of the well-documented demand for data scientists. Some have speculated that AutoML might ease the talent crunch for data scientists, if it does in fact allow existing employees to do the same type of ML work without specialized training. Amid COVID-19 cost-cutting, questions have been raised about whether demand for data scientists would begin to cool, especially since it's a field that can struggle to show clear ROI in some business settings. How will the rise of AutoML fit into the mix?

Just A Rather Very Intelligent System (J.A.R.V.I.S.) was originally Tony Stark's natural-language user interface computer system (Image courtesy Marvel Cinematic Universe)

I do think that AutoML will impact the data science field. As AutoML tools become more widespread, we'll see a corresponding increase in ML adoption among businesses. For a long time, enterprise ML was the province of the few: tech giants, innovative startups, and traditional businesses that were large enough to fund in-house AI centers. Tools like AutoML will make basic ML models and outputs more accessible to other types of companies. This doesn't mean that the neighborhood florist is going to suddenly have a system like J.A.R.V.I.S. running the place; as an article from McKinsey rightly notes, at present the technology is best suited to streamlining the development of common forecasting tasks.

As AutoML increases enterprise ML adoption by lowering the barriers, enterprises will in fact find that they have a greater need for expert data scientists, not a reduced one. As organizations adopt more and more ML technologies and their use cases become more specific, they'll outgrow the one-size-fits-all approach. At that point, they'll need qualified data scientists and ML engineers to help continue on a growth trajectory. This is true not only because of the limitations of AutoML, but also because of the need for human oversight to account for ethical concerns like bias as ML usage becomes more prevalent.

Additionally, ML workflows are not typically a "set it and forget it" process: as dynamic forces of business change over time, data drift or concept drift may cause ML models to become less accurate. A skilled data scientist can detect and correct for these types of problems; they can also improve the overall model function by adjusting the training data as needed, to avoid the classic "garbage in, garbage out" scenario. While AutoML can improve access to basic ML workflows that a business can build on, experienced data scientists are needed to enable peak performance of those workflows and to provide the most nuanced, useful interpretation of results.
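
As an illustration of the kind of drift check a data scientist might run, here is a minimal sketch using a two-sample Kolmogorov-Smirnov test from SciPy. The synthetic "training" and "production" samples and the 0.01 threshold are assumptions for the example:

```python
# A minimal drift-detection sketch: compare a training-time reference
# sample of one feature against a recent production window using a
# two-sample KS test. The data and threshold here are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)    # reference
production_feature = rng.normal(loc=0.4, scale=1.0, size=5000)  # drifted

stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"Likely drift (KS statistic {stat:.3f}); consider retraining.")
else:
    print("No significant drift detected in this feature.")
```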

The reality is that we'll never automate away the need for data scientists, even if we do automate some of their tasks or improve accessibility to basic ML workflows for non-technical business people. If anything, growing adoption of AutoML will drive increased need for real, live data science expertise. Putting companies on a more equal footing in terms of their ability to incorporate ML into their businesses is a good thing, as are efforts to further democratize data science and AI. But we'll always need expert data scientists to guide implementations, especially as they become more use-case-specific or begin to more directly impact the public.

About the author: Kevin Goldsmith serves as the Chief Technology Officer for Anaconda, the data science platform that has more than 25 million users. In his role, he brings more than 29 years of experience in software development and engineering management to the team, where he oversees innovation for Anaconda's current open-source and commercial offerings. Goldsmith also works to develop new solutions to bring data science practitioners together with innovators, vendors, and thought leaders in the industry. Prior to Anaconda, Kevin served as CTO of AI-powered identity management company Onfido. Other roles have included CTO at Avvo, vice president of engineering, consumer at Spotify, and nine years at Adobe Systems as a director of engineering. He has also held software engineering roles at Microsoft and IBM.

Related Items:

Hiring, Pay for Data Science and Analytics Pros Picks Up Steam

AutoML Tools Emerge as Data Science Difference Makers

What's the Difference Between AI, ML, Deep Learning, and Active Learning?

The rest is here:

Why Data Scientists and ML Engineers Shouldn't Worry About the Rise of AutoML - Datanami

Read More..

Businesses need to show data science isn't dull, it can be fun and rewarding – ComputerWeekly.com

This is a guest blogpost by Libby Duane Adams, chief advocacy officer at Alteryx.

In today's business environment, data is key to success. With over 2.5 quintillion bytes of data created each day, data-driven insights drive every major business decision and are essential to discovering more efficient processes, reducing risk and finding new sources of revenue.

However, harnessing the power of data continues to be a challenge, due to the ongoing shortage of data science skills in the labour market, as demand for digital skills still far outstrips supply. A recent UK government report found that nearly half of businesses (46%) have struggled to recruit for roles requiring hard data and analytics skills.

IDC estimates that by 2025 we'll have created more than 175 zettabytes of data globally. As the world of business continues evolving, companies are moving fast and need fast solutions: they can no longer tolerate knowledge workers delivering low strategic output from legacy tools. The sheer abundance of data and its growing complexity mean that data-skilled workers able to harness it for fast and sound decisions will be at the forefront of the job market throughout the next decade.

While not every worker needs to become a data scientist, many businesses are turning to upskilling their employees to overcome this shortage, building their own internal pool of talented data workers with the skills, desire, knowledge and analytical expertise to be successful and thrive in an increasingly data-rich environment.

Organisations have already started to recognise data literacy as an important skill for their workforce. A recent McKinsey study found that, when increasing their talent pool of data specialists, 84% of executive leaders experienced more success from upskilling their existing workforce, compared to just 16% who succeeded when hiring externally. By providing analytics solutions that upskill information workers into data-literate knowledge workers, organisations enable those workers, individually and collectively, to help drive organisational transformation. Employees have the context of the business questions to solve, as well as knowledge of the data assets available that can drive answers through analytics.

Creating a culture of upskilling is by no means an easy feat, and getting employees engaged can be half the battle. It requires building a new culture where data is accessible to workers throughout the organisation, as well as significant investment in new tools and platforms that do not require users to know complex coding languages. Low-code and no-code solutions provide space for employees who want to upskill, learn and practise to become skilled data workers themselves.

By implementing formal upskilling programmes that focus on key skills and technologies, in addition to providing a learning curriculum that can result in valuable and credible certifications, companies can set themselves and their employees up for success. However, these programmes should not be dry and academic. In fact, the upskilling journey can be a social experience.

For instance, businesses can host "lunch and learn" activities and company-wide data challenges that bring people together from across the organisation, introduce staff to data science and make it appealing and accessible. Gamification strategies can also encourage staff to use online learning resources and develop their data skills through leaderboards, points scoring and personal challenges and achievements.

The aim is to create an open culture of learning where staff communicate and work together to solve data problems. A company's existing data scientists should act as coaches to colleagues, encouraging them to think analytically and ask the right questions of datasets. This will help build data skills into every team, so that data analytics becomes an enterprise-wide initiative, rather than siloed into one team of analytics professionals.

The other benefit of this more social approach to data science is how it can impact diversity. Simply put, data science has a diversity problem: as few as 15% of data scientists are women. This lack of diversity is a huge concern, because businesses see better results when a diverse range of approaches and points of view is brought to bear on data challenges, helping to ensure data models and algorithms are free from bias. It's no secret that the more diverse the workforce, the richer the business outcomes: research by McKinsey has shown that organisations with more ethnic and gender diversity are more likely to outperform. When we value our varied experiences, they shape how we solve problems and get to better answers.

The evolving landscape of the data science and analytics market creates an inherent need for organisations to foster data analytics cultures fuelled by collaboration and diversity. This presents an opportunity for demographics traditionally underrepresented in the technology workforce to accelerate their careers by embracing analytics roles. For business leaders, it represents an opportunity to look within for specialists with the right attitude to problem solving, not just technical aptitude, and to support and upskill them in both data literacy and analytics.

By investing in upskilling, people of any age, gender and background can learn vital data skills and progress their careers. It also enables companies to recruit new individuals who don't necessarily have an academic background or specific coding skills, which may encourage a more diverse range of applicants. This was the experience of the sports and fitness apparel company Gymshark, which uses Alteryx to empower and upskill its employees.

"We've been able to expand faster because we are able to find these individuals more easily, rather than having to find people with very specific skillsets," says Gemma Hulbert, CDO at Gymshark. "New hires are now able to come in and hit the ground running right away with Alteryx, even though they aren't data analysts. We are able to create apps that empower our employees to learn new skills using the platform."

Data science doesn't have to be the preserve of the elite few. Anyone in the workforce with a passion for solving data puzzles is now able to do it, not just a handful of specialists. In the past, employees with vast expertise in their own fields were locked out of data analytics due to the technical knowledge it required.

With the right tools and investment, anyone can learn data skills, and when people are encouraged to be creative and think critically, they are able to ask the right questions and solve all sorts of problems. Thanks to self-service platforms and automation, the power of analytics is no longer restricted to a few gatekeepers; it is available to all. By enabling employees to scale their passion for data science, businesses will accelerate the knowledge worker's journey to becoming data-driven, be better able to unlock data-driven insights, and tackle the world's biggest problems through a successful digital transformation journey.

Original post:

Businesses need to show data science isn't dull, it can be fun and rewarding - ComputerWeekly.com

Read More..

Top Data Science Funding’s and Investment to Watch Out in Q2 2021 – Analytics Insight

Data and analytics are used every day in businesses to drive transformation and efficiency and to generate accurate insights for greater revenue. The impact of data science reaches far beyond the IT industry, solving some of the most pressing issues in other sectors. In healthcare, defense, and education, data science technologies have begun to revolutionize traditional operations.

This article provides a list of the top data science company funding rounds and investments to look out for in Q2 of 2021.

Amount Raised: US$15M

Transaction Type: Series A

Key Investor(s): Menlo Ventures, Amity Ventures, and others

Edge Delta is a stream processing platform for observability, predicting, and detecting anomalies in operational and security data. The company allows enterprises to use a network of analytics to identify and remediate potential DevOps, IT, operational, and security incidents more accurately.

Amount Raised: US$140M

Transaction Type: Series B

Key Investor(s): Softbank Vision Fund, 5square, and others

Vianai provides artificial intelligence solutions to its clients. The company focuses on defining, maintaining, and delivering software for industry leaders, and envisions empowering millions of clients to build machine learning applications and solutions that reach new heights.

Amount Raised: US$11M

Transaction Type: Series A

Key Investor(s): ATX Venture Partners, Circle K Ventures

Pensa Systems is a provider of autonomous perception systems for retail inventory visibility. The company has created a platform that allows drones to monitor shelves and alert retailers in real time when a product is out of stock or has been restocked.

Amount Raised: US$3.4M

Transaction Type: Seed

Key Investor(s): Seraphim Capital, Creative Ventures, and others

PlanetWatchers provides SaaS solutions for enterprises, governments, and NGOs to monitor their natural assets across the world. Its geospatial technology combines machine learning algorithms, cloud infrastructure, and multi-source satellite sensors to provide critical information for efficient management.

Continue reading here:

Top Data Science Funding's and Investment to Watch Out in Q2 2021 - Analytics Insight

Read More..

Hot topics and emerging trends in data science – Information Age

We gauged the perspectives of experts in data science, asking them about the field's biggest emerging trends

What does the near future of data science entail?

As one of the fastest evolving areas of tech, data science has risen up the corporate agenda as fewer and fewer leaders base business decisions on guesswork. With added capabilities such as artificial intelligence (AI) and the edge complementing the work of data scientists, the field is becoming more accessible to employees, though for the most part this still requires training in data skills. In this article, we explore some key emerging trends in data science, as identified by experts in the field.

Firstly, it's believed that the involvement of AI and machine learning (ML) will increase further, and enable more industries to become truly data-centric.

"As businesses start to see the benefits of artificial intelligence and machine learning enabled platforms, they will invest in these technologies further," said Douggie Melville-Clarke, head of data science at Duco.

In fact, the Duco State of Reconciliation report, which surveyed 300 heads of global reconciliation utilities, including chief operating officers, heads of financial control and heads of finance transformation, found that 42% of those surveyed will investigate the use of more machine learning in 2021 for the purposes of intelligent data automation.

Data science in insurance

Melville-Clarke went on to cite the insurance industry, often perceived as a sector that's had difficulty innovating due to high levels of regulation, as an example of future success when it comes to data science.

He explained: "The insurance industry, for example, has already embraced automation for processes such as underwriting and quote generation. But the more valuable use of artificial intelligence and machine learning is to increase your service and market share through uses like constrained customisation."

"Personalisation is one of the key ways that banks and insurance companies can differentiate themselves, but without machine learning this can be a lengthy and expensive process."

"Machine learning can help these industries tailor their products to meet the individual consumer's needs in a much more cost-effective way, bettering the customer experience and increasing customisation."

Along with the rising use of AI and ML models, organisations have been combining AI with robotic process automation (RPA) to reduce operational costs by automating decision making. This trend, known as hyperautomation, is predicted to help companies continue innovating quickly in a post-COVID environment over the next few years.

"In many ways, this isn't a new concept; the key goal of enterprise investment in data science for the past decade has been to automate decision-making processes based on AI and ML," explained Rich Pugh, co-founder and chief data scientist at Mango Solutions, an Ascent company.

"What is new here is that hyperautomation is underpinned by an RPA-first approach that can turbocharge process automation and drive increased collaboration across analytic and IT functions."

"Business leaders need to focus on how to harness enterprise automation and continuous intelligence to elevate the customer experience. Whether that is embedding intelligent thinking into the processes that will drive more informed decision making, such as deploying automation around pricing decisions to deliver a more efficient and personalised service, or leveraging richer real-time customer insights in conjunction with automation to execute highly relevant offers and new services at speed."

"Embarking on the hyperautomation journey begins with achieving some realistic and measurable future outcomes. Specifically, this should include aiming for high-value processes, focusing on automation and change, and initiating a structure to gather the data that will enable future success."

Dan Sommer, senior director at Qlik, identified software-as-a-service (SaaS) and a self-service approach among users, along with a shift in advanced analytics, as notable emerging trends in data science.

"To those in the industry, it's clear that SaaS will be everyone's new best friend, with a greater migration of databases and applications from on-premise to cloud environments," said Sommer.

"Cloud computing has helped many businesses, organisations, and schools to keep the lights on in virtual environments, and we're now going to see an enhanced focus on SaaS as hybrid operations look set to remain."

"In addition, we'll see self-service evolving to self-sufficiency when it comes to effectively using data and analytics. Empowering users to access data, insights and business logic earlier and more intuitively will enable the move from visualisation self-service to data self-sufficiency in the near future."

"Finally, advanced analytics need to look different. In uncertain times, we can no longer count on backward-looking data to build a comprehensive model of the future. Instead, we need to give particular focus to outliers, rather than exclude them, and this will define how we tackle threats going forward too."

With employees gradually becoming more comfortable with using data science tools to make decisions, aided by automation and machine intelligence, a concept that's materialised as a hot topic for the next stage of development is the data fabric.

Trevor Morgan, product manager at comforte AG, explained: "A data fabric is more of an architectural overlay on top of massive enterprise data ecosystems. The data fabric unifies disparate data sources and streams across many different topologies (both on-premise and in the cloud), and provides multiple ways of accessing and working with that data for organisational personnel, and with the larger fabric as a contextual backdrop."

"For large enterprises that are moving with hyper-agility while working with multiple or many Big Data environments, data fabric technology will provide the means to harness all this information and make it workable throughout the enterprise."

Another important trend to consider regarding the future of data science is the new career paths and jobs that are set to emerge in the coming years.

"According to the World Economic Forum (WEF)'s Future of Jobs Report 2020, 94% of UK employers plan to hire new permanent staff with skills relevant to new technologies and expect existing employees to pick up new skills on the job," said Anthony Tattersall, vice-president, enterprise, EMEA at Coursera.

"What's more, WEF's top emerging jobs in the UK, data scientists, AI and machine learning specialists, and big data and Internet of Things specialists, all call for skills of this nature."

"We therefore envision that access to a variety of job-relevant credentials, including a path to entry-level digital jobs, will be key to reskilling at scale and accelerating economic recovery in the years ahead."

The Industrial Data Scientist

As for new roles set to emerge in data science, Adi Pendyala, senior director at Aspen Technology, predicts the rise of the industrial data scientist: "These scientists will be a new breed of tech-driven, data-empowered domain experts with access to more industrial data than ever before, as well as the accessible AI/ML and analytics tools needed to translate that information into actionable intelligence across the enterprise."

"Industrial data scientists will represent a new kind of crossroads between our traditional understanding of citizen data scientists and industrial domain experts: workers who possess the domain expertise of the latter but are increasingly shifting over to the data realm occupied by the former."

New tools

Many organisations are affected by a shortage of data scientists relative to demand, but Julien Alteirac, regional vice-president, UK&I at Snowflake, believes that new tools, powered by ML, could help to mitigate this skills gap in the near future.

"When it comes to analysing data, most organisations employ an abundance of data analysts and a limited number of data scientists, due in large part to the limited supply and high costs associated with data scientists," said Alteirac.

"Since analysts lack the data science expertise required to build ML models, data scientists have become a potential bottleneck for broadening the use of ML. However, new and improved ML tools which are more user-friendly are helping organisations realise the power of data science."

"Data analysts are empowered with access to powerful models without needing to manually build them. Specifically, automated machine learning (AutoML) and AI services via APIs are removing the need to manually prepare data and then build and train models. AutoML tools and AI services lower the barrier to entry for ML, so almost anyone will now be able to access and use data science without requiring an academic background."
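
As one concrete illustration of this lowered barrier, here is a minimal sketch using the open-source FLAML AutoML library; FLAML is not mentioned by Alteirac and stands in here for the category of tools he describes. The calls follow FLAML's documented quickstart, but treat the details as illustrative:

```python
# A minimal AutoML sketch using the open-source FLAML library
# (pip install flaml). Interface per FLAML's documented quickstart.
from flaml import AutoML
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)

automl = AutoML()
# FLAML searches over learners and hyperparameters within the time budget.
automl.fit(X_train=X, y_train=y, task="classification", time_budget=60)

print(automl.best_estimator)   # name of the best learner found, e.g. "lgbm"
print(automl.predict(X[:5]))   # predictions from the best model
```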

Read this article:

Hot topics and emerging trends in data science - Information Age

Read More..

Training the Next Generation of Indigenous Data Scientists – The New York Times

"Native DNA is so sought after that people are looking for proxy data, and one of the big proxy data is the microbiome," Mr. Yracheta said. "If you're a Native person, you have to consider all these variables if you want to protect your people and your culture."

In a presentation at the conference, Joslynn Lee, a member of the Navajo, Laguna Pueblo and Acoma Pueblo Nations and a biochemist at Fort Lewis College in Durango, Colo., spoke about her experience tracking the changes in microbial communities in rivers that experienced a mine wastewater spill in Silverton, Colo. Dr. Lee also offered practical tips on how to plan a microbiome analysis, from collecting a sample to processing it.

In a data-science career panel, Rebecca Pollet, a biochemist and a member of the Cherokee Nation, noted how many mainstream pharmaceutical drugs were developed based on the traditional knowledge and plant medicine of Native people. The anti-malarial drug quinine, for example, was developed from the bark of a species of Cinchona trees, which the Quechua people historically used as medicine. Dr. Pollet, who studies the effects of pharmaceutical drugs and traditional food on the gut microbiome, asked: "How do we honor that traditional knowledge and make up for what's been covered up?"

One participant, the Lakota elder Les Ducheneaux, added that he believed that medicine derived from traditional knowledge wrongly removed the prayers and rituals that would traditionally accompany the treatment, rendering the medicine less effective. "You constantly have to weigh the scientific part of medicine with the cultural and spiritual part of what you're doing," he said.

Over the course of the IndigiData conference, participants also discussed ways to take charge of their own data to serve their communities.

Mason Grimshaw, a data scientist and a board member of Indigenous in A.I., talked about his research with language data on the International Wakashan A.I. Consortium. The consortium, led by an engineer, Michael Running Wolf, is developing an automatic speech recognition A.I. for Wakashan languages, a family of endangered languages spoken among several First Nations communities. The researchers believe automatic speech recognition models can preserve fluency in Wakashan languages and revitalize their use by future generations.

Original post:

Training the Next Generation of Indigenous Data Scientists - The New York Times

Read More..

New High Profile Addition to Maritime Optima's Team, Set to Build the Company's Analysis/Data Science Unit – Hellenic Shipping News Worldwide

Sven Melsom Ziegler, formerly of Clarkson Platou, will head and build Maritime Optima's analysis/data science unit. Sven is a well-known and experienced shipping/offshore market analyst.

"I am looking forward to working with the team and the data in Maritime Optima," Mr Melsom Ziegler says. "I was introduced to the team and their data platform during the spring. It is rare to find a startup investing so much in maritime data quality, and to see such a competent team with so much passion and willingness to improve data quality continuously. Since the industry lacks clear definitions of most data, someone must start doing this thoroughly, because machines need clear definitions to create value for humans. Otherwise, the gains from digitalization and automation will be hard to obtain. Here we have a unique data platform that enables state-of-the-art shipping/offshore models and analysis."

Sven Melsom Ziegler started working with RS Platou, where he stayed for 21 years. Sven was born in Cape Town and raised in Larvik and Athens. After completing his education at Strathclyde University of Glasgow and CASS in London, he later settled down in Oslo. He comes from Forum Market Services, where he has been working with mainly oil market/bunker analysis and quantitative strategies.

"We are very proud to have Sven on board," says founder and CEO Kristin Omholt-Jensen. "We have kept a very low profile so far. During the Covid months, we have spent time keeping the team well, together and motivated, investing in and collecting data, defining data templates, and testing and scaling. It is super important for us to develop the product together with our users, so when our R&D partners left their offices for home offices and we still needed active feedback from real users, we launched a freemium application last autumn. Since the launch, we have been growing very quickly. Today we have close to 9,000 active registered users, and we pick up AIS data every minute for more than 65,000 vessels from more than 600 satellites and terrestrial senders. We also know we manage to do cost-effective scaling. We have a very exciting roadmap and will develop new products and features continuously, based on feedback from users, and it is perfect timing to have Sven as part of our team."

Maritime Optima is set up to develop and distribute user-friendly, flexible maritime SaaS software across platforms, helping professionals in the maritime industry save time, work more efficiently, make better decisions and maybe have more fun. Software should be easy to understand and looked upon as a partner, not something you have to use and hate. We believe colleagues should work in teams, but you can also work in your own one-man team if you want. We think professionals will dare to share more and more, but they will not want to share everything with everyone, so we have made it super easy to share publicly or keep things private, and we have included a user log showing the user's and team's activity, so it is easy to find later.

"The maritime office software industry is young, and there are many startups. Changing the way the industry works, its routines, and how things have been done for years will take time, but we are prepared and willing to show that we are here to stay. We continuously launch new features based on user feedback, invest time to improve data quality, and increase the number of users. The maritime software industry is still young and will be consolidated during the next few years, and we want to play an active role in that consolidation," says Omholt-Jensen.

Start building your own maritime office by registering a free account in Maritime Optima: http://www.app.maritimeoptima.com

Source: Maritime Optima

Read more from the original source:

New High Profile Addition to Maritime Optimas Team, Set to Build the Companys Analysis/Data Science Unit - Hellenic Shipping News Worldwide

Read More..

How to empower the data scientist in the era of edge computing and AI – Information Age

Dan Warner, CEO and co-founder of LGN, discusses how data scientists can be empowered in the era of edge computing and AI

With data constantly evolving, the scientists managing it cannot succeed alone.

For a while now, the position of data scientist has been one of the most hyped roles in technology and, indeed, business. It's not hard to see why: as organisations wake up to the seemingly limitless potential in their data, they've realised they need people who can extract, analyse and interpret large amounts of data. The demand is such that there is ongoing talk of a data scientist shortage, particularly in more experienced, senior roles.

Yet for all this attention, how effective are those data scientists, and how empowered do they actually feel? It's a pertinent question, coming at a time when so much data is underutilised. Are businesses, knowing they need to make better use of their data, hiring data scientists without fully understanding how best to deploy the talent?

Perhaps a better way to look at it is to ask whether businesses know how to make better use of their data: are they hiring data scientists and expecting them to work miracles, or are they ensuring that not only do they have the right talent, but that they are feeding these teams with the right data?

Many might think that it's the job of the data scientist to find the right data, but they're wrong. Ultimately, data scientists can only work with what they're given, in the same way that a salesperson can only do so much with a poor product, or a Formula One driver can only achieve so much with an average car.

What, then, is the right data? Obviously, that varies from business to business, but fundamentally there are a number of principles that good data will follow, irrespective of organisational need. Firstly, it needs to be fresh: it needs to reflect the real world as it is at that moment. Everything changes so fast that a lot of data quickly becomes irrelevant. The more it stagnates, the less value it has.

So, if a data scientist is working on old data when there is more recent information available, the insights they can extract are going to be less relevant to the environment the business is operating in.

Secondly, it needs to be live data: from the real world, not training data, and not made up. Why? Because the real world is messy, throwing up anomalies that no one would ever have thought of, creating obstacles that models, and indeed data scientists, brought up on sanitised training data won't be able to process.

Put another way: if an organisation feeds its data scientists and their models stale, offline data, then the best that enterprise can hope for is irrelevant, limited insights.

That means businesses need to find a way of continually feeding their data scientists with live, evolutionary data, in real-time, from the real world. How do they do that? With edge computing.

Edge computing needs no introduction: with the explosion in Internet of Things devices over the last few years, more and more data processing is happening at the edge of networks. Sensors on everything from wind turbines and tractors to fridges and streetlamps are capturing data constantly. It's real, it's live, it's messy, and it is exactly what data scientists need to be working on.

Businesses need to empower their data scientists by giving them training data and performance metrics from the edge. They can then use this to inform their AI models, which in turn are then deployed onto edge devices. These real-world environments give data scientists vital information on how their models stand up to anomalies and variations that can't be recreated in labs or test environments. The models could well perform badly, at least initially; that's a good thing, as it gives data scientists something to dig into, to understand what came up that they hadn't thought of.

That said, whether the models perform well or poorly, data needs to be accessed, cleaned, annotated and ultimately fed back into the model for training on a continual basis. It's a feedback loop that needs to keep running so that systems can improve and adapt. But it needs to be a smart extraction of data: no system can possibly manage all the data sensors are collecting, so having a way of identifying and getting the most important data back from the edge is critical.
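
Here is a minimal sketch of what such smart extraction can look like, assuming an uncertainty-based filter: the edge device sends back only the samples its model is least confident about. The scikit-learn classifier is a stand-in for whatever model actually runs on the device:

```python
# A minimal sketch of uncertainty-based sample selection at the edge: keep
# only the samples the model is least certain about and send those back
# for labelling and retraining. The logistic-regression model stands in
# for whatever classifier actually runs on the device.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X[:1000], y[:1000])

def select_for_upload(batch, budget=10):
    """Return the `budget` samples with the least confident predictions."""
    confidence = model.predict_proba(batch).max(axis=1)  # top-class probability
    return batch[np.argsort(confidence)[:budget]]

# Only the most ambiguous samples from the "edge" batch go back upstream.
to_upload = select_for_upload(X[1000:], budget=10)
print(f"Uploading {len(to_upload)} of {len(X[1000:])} samples for review.")
```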

On top of that, data scientists need to be able to redeploy sensors and machines to investigate, re-image and analyse data sources confusing the AI models. Whichever way the data has been gathered, however automated the process, at some point it was subject to human thinking, assumptions and presumptions. These may have been based on the data and evidence available at the time, but they may no longer be appropriate for capturing the data now needed. This is where being able to shift what data is being collected is vital for data scientists to remain effective, working on the most relevant information.

Ultimately, this all signals a shift away from the old paradigm of collecting big sets of training data, segmenting, training the model and seeing what happens, and towards a new paradigm: one of active learning, where AI models learn how to cope with the real world, and data scientists are empowered to work effectively. In doing so, they will be better equipped to gather the insights and intelligence needed to give their organisations a true competitive edge in increasingly crowded, data-driven marketplaces.

Originally posted here:

How to empower the data scientist in the era of edge computing and AI - Information Age

Read More..

Don’t Forget the Human Factor in Autonomous Systems and AI Development – Datanami

(Jozsef Bagota/Shutterstock)

It goes without saying that humans are the intended beneficiaries of the AI applications and autonomous systems that data scientists and developers are creating. But what's the best way to design these AI apps and autonomous systems to maximize human interaction and human benefit? That's a tougher question to answer. It's also the focus of human factors specialists, who are increasingly in demand.

Datanami recently caught up with one of these in-demand human factors specialists. Catherine Neubauer is a research psychologist at the Army Research Lab, and a lecturer at the University of Southern California's online Master of Science in Applied Psychology program. Neubauer, who holds a Ph.D. in psychology with an emphasis on human factors from the University of Cincinnati, has researched various aspects of the human factors equation, including assessing human performance and decision-making.

According to Neubauer, there is a slew of concerns where humans and the latest technology come together.

"AI and autonomous systems are really becoming very prevalent in our everyday interactions," she says. "We really need to focus on them because if we don't design them with the human user in mind, that interaction is not going to be easy or desirable."

As an applied psychologist working in this field, Neubauer understands the problem from multiple angles. On the one hand, she wants to understand how humans interact with autonomous systems and AI so that humans can be better trained to work with next-gen systems. On the other hand, her work also informs data scientists and developers on how they can build better, more human-centered systems.

There is considerable room for improvement on both sides of the equation. "I think we're getting there," she says, "but I think a lot more work is needed."

Tesla cars have a self-driving mode, but the carmaker warns users not to rely on it (Flystock/Shutterstock)

For instance, in the autonomous driving arena, where Neubauer has spent a considerable amount of time, people may feel that great progress is being made. After all, some new cars can essentially drive themselves, at least in some circumstances. But those "aha" experiences are not what they may appear to be, she says.

"There's this idea of 'Oh great, I have this self-driving car. It's a Tesla. I can just sit back and not pay attention [and] fall asleep.' That's not the case. We're not there yet," she tells Datanami in a recent interview. "There are limitations to this technology. In an ideal state, yes, it can drive around on its own. But the human should always be ready to take over control if they need to."

Similarly, advances in natural language processing (NLP) have supercharged the capabilities of personal assistants, which are able to understand and respond to ever-more-sophisticated questions and requests. But once again, the gains should not overshadow the fact that a lot more work is needed.

"I think we are doing a good job in the sense that we've made very large gains in what we're capable of doing," she says. "But I still think that, you know, more work needs to be done to get it to where, you know, you can just easily interact with a personal assistant, like a robot or something like that, with no mistakes, no errors. We're still seeing some kinks that need to be worked out."

Some of Neubauer's latest research involves the algorithmic detection of human emotion. Computer vision technology has made great strides not only in being able to recognize specific faces, but also in detecting somebody's mood based on how their face appears. Knowing if a human is happy, sad, or angry can be very valuable, and governments around the world are investing in the technology as part of their defense initiatives.

But, again, the technology is not quite there yet, Neubauer says.

The best AI products are designed with humans in mind (Aurielaki/Shutterstock)

"While I think it's really great that we kind of have this classification system to read the emotion, you kind of have to take that with a grain of salt, because everyone expresses emotions differently," she says. "And some people might feel really happy, but they're just not outwardly expressive. Some people might feel really sad or depressed, but you might not see that expressed, for whatever reasons."

Instead of just using the computer vision algorithm, Neubauer is investigating multi-modal forms of emotion detection. This is a promising area of research, she says. "I'm not going to focus specifically on a facial expression. I'm going to get other streams of data to give me more information about a human," she says.
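
As a rough illustration of the multi-modal idea, here is a minimal late-fusion sketch: per-modality emotion scores are combined by a weighted average rather than trusting the face alone. All scores, weights and modality names are made-up placeholders, not outputs of any real model or of Neubauer's research:

```python
# A minimal late-fusion sketch: combine per-modality emotion probabilities
# with a weighted average instead of trusting the face alone. All numbers
# below are illustrative placeholders, not real model outputs.
import numpy as np

EMOTIONS = ["happy", "neutral", "sad"]

# Hypothetical per-modality probability estimates for one subject.
scores = {
    "face":   np.array([0.70, 0.20, 0.10]),  # expression classifier
    "voice":  np.array([0.30, 0.30, 0.40]),  # speech prosody model
    "physio": np.array([0.20, 0.30, 0.50]),  # e.g., heart-rate variability
}
weights = {"face": 0.4, "voice": 0.3, "physio": 0.3}

fused = sum(weights[m] * scores[m] for m in scores)
print(f"Fused estimate: {EMOTIONS[int(np.argmax(fused))]} ({fused.max():.2f})")
```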

So what should data scientists and autonomous systems developers do if they want to benefit from human factors research? Number one: know your users.

"I think that the best products or systems or technologies that we interact with have been designed with the human user in mind," Neubauer says. "First and foremost, you have to make sure that you're designing the systems for your users, to make them easy to use."

A rule of thumb with this sort of design thinking is to make the product so easy to use that it doesn't require a manual. This often requires limiting the ways in which a user can interact with an application or a system, and encouraging exploration. (There is a limit to this rule, of course; after all, Tesla tells users in the manual to always be ready to take over the controls, but many people obviously ignore this.)

Neubauer's second piece of advice for data scientists and autonomous systems developers who want to incorporate human factors advances into their work, interestingly, concerns ethics.

"I like to think of myself as an ethical person, and I am always thinking of where my research and my work is going, and who's going to be using it," she says. "Just because we can do something with technology doesn't mean we should. So anytime we're implementing this technology, building new systems, we have to ask ourselves: is it actually helping society? And who is it helping?"

Catherine Neubauer, PhD, is a research psychologist at the Army Research Lab and a lecturer in human factors at USC

Not all humans are good at assessing risk. It's not necessarily a qualification that data scientists will look to build out, or to put on their resume. But in Neubauer's reading, risk assessment should be part of the creative process for those creating AI apps and autonomous systems, particularly when it comes to the risks that they are asking their users to take.

The risks of a bad outcome are significantly higher when AI and autonomous features are built into self-driving cars, autopilot systems in airplanes, and traffic control systems for trains, for example, than they are in developing a personal assistant or adding probabilistic features to a word processor program (Clippy, we're looking at you).

"If it's some sort of stressful, high-stakes scenario and I have an autonomous agent working with me and it [tells] me to go left when I should go right, because that's the data that it had trained its decision on, that's going to be a problem," Neubauer says. "On the other hand, maybe you're a surgeon going into surgery. You want to make sure your personal assistant is letting you know what your appointments are. So I think it depends on the scenario that you're in and how important it is to make sure that we have a very small, if not non-existent, percentage or likelihood that an error's going to occur."

It appears that we're at the beginning of a period of great progress in the field of AI and autonomous systems. There are many aspects of life and society that can benefit from a certain degree of data-driven decision making.

But in the long run, there are other intangible aspects to the human factors equation that should bear some scrutiny. Neubauer understands that AI and autonomous systems can reduce our cognitive workload and let us get more done. But she also wonders how the ever-growing use of this technology will impact human development.

"Sometimes I get concerned that we basically have these personal assistants in our phone reminding us to do everything," she says. "We don't have to memorize phone numbers anymore. What is actually going to happen to our cognitive system if we have GPS taking us everywhere? We don't have to actually develop a mental map of the cities we live in. It worries me that those kinds of basic skills are not being used. And if they're not being used, we're not going to be strong in those areas."

Related Items:

Why Human Integration Is Critical to the Future of AI

The Next Generation of AI: Explainable AI

User Autonomy and the Pending Data War

More here:

Don't Forget the Human Factor in Autonomous Systems and AI Development - Datanami

Read More..

Humanities versus STEM: The forced dichotomy where no one wins – IT PRO

Last autumn, following a difficult six months of redundancies and furloughs that hit the arts and culture sector particularly hard, a governmental campaign encouraging people to retrain to work in the tech industry was met with heavy criticism from the public. The ad showed a young dancer named Fatima tying up her ballet shoes, likely in preparation for rehearsal, with the message "Fatima's next job could be in cyber. (she just doesn't know it yet)" overlaid on the left of the image. In the end, the campaign became so unpopular (not to mention thoroughly ridiculed) that it was ultimately scrapped and the government forced to apologise.

Although it was probably not the main intention by design, the ad became a symbol of the way the arts and culture sectors are seen as frivolous, held in less respect than the supposedly more responsible and important STEM subjects. Last year, arts and design, which could well have been the course studied by the person hired by the government to create the infamous campaign, ranked ninth among the ten most popular university subjects, according to the QS World University Rankings by Subject. Computer science and information systems, and engineering and technology, topped the list, while humanities subjects such as history, languages, literature and philosophy were nowhere to be seen. This shouldn't come as a surprise: many young people considering studying arts and humanities are advised not to pursue this path, as it's stereotypically seen as a fast track to unemployment or redundancy, as seen during the pandemic. In fact, if you imagine a hypothetical usefulness spectrum dictated by the economy, humanities are going to be on the opposite end from subjects such as maths, computer science and chemistry.

Long-time NASA engineer Peter Scott uses the metaphor of driving a car to illustrate the divide between those who are in the tech industry and those who aren't.

"The fact is that it's a matter of perspective, and this perspective came to me after driving my children everywhere. They have to sit in the back seat all the time, and when one of them gets old enough and they get to ride in the front seat once in a while, it's like: 'Oh, you can see so much more here.'"

In a world that is rapidly progressing with new technologies, being outside of STEM is a bit like being driven around in a car while being forced to sit in the back seat.

"There are insiders and outsiders in every industry, but the tech industry is the one that's doing the most to reshape where we're going," he says. "For everyone else, it's like: 'We don't know where we're going. [The driver] seems to think you know where you're going, but you're not giving us a good enough picture of it.'"

After three decades at NASA's Jet Propulsion Laboratory (JPL), Scott decided to embark on a more human-facing career of public speaking, which he describes as scarier and less likely to happen than jumping out of a plane. However, he also notes that he managed to achieve both. Nowadays, he blends being a business coach and successful TEDx speaker with contracting for NASA.

"I'm balancing both of these worlds because I want and need to be able to see both sides at the same time. Engineers and scientists, we tend to get locked into a certain view of the world that's driven by the equations and the principles that we know and the natural laws that explain everything. And you can't say that it doesn't, because that's the principle of science," he says.

However, while science may be the vanguard of change, if taken in isolation it leaves out a whole perspective that's driven by poetry, emotion, and artistic values, notes Scott.

"Because they're not quantifiable and measurable, they get not so much disdained as just ignored by scientists and tech people. It doesn't get you where you need to go as a tech person, so you can't afford to spend time on that. So these worlds are like C. P. Snow's two cultures: they're growing, if anything, further apart," he warns.

The gap between the two separate cultures of humanities and STEM is especially visible in the evolving technology of artificial intelligence, which is becoming more present in everyday life in our phones, workplaces, and even supermarkets.

"Our journey with AI especially is one that requires a common understanding," says Scott. "We can't advance this technological agenda that upends everyone's life in a largely, and hopefully, positive way without understanding both sides, without finding the bridge between those two cultures."

Although AI is treated as inherently technological, last year's events have proven that it's also a major ethical issue. Nevertheless, despite University of Oxford physicist David Deutsch predicting in 2012 that philosophy will be the key that unlocks artificial intelligence, the New College of the Humanities (NCH) is so far the only university in the UK to offer a joint degree in philosophy and artificial intelligence. Dr Brian Ball, head of NCH's philosophy faculty and an associate professor, tells IT Pro that the degree was launched after the school partnered with the Boston-based Northeastern University.

"We are quite proud of our MA in Philosophy and Artificial Intelligence, and some of our related degrees, such as our MSc AI with a Human Face, and our various bachelor's degrees with humanities majors and data science minors," he says. "They are prompted in part by our joining the global network of Northeastern University, where interdisciplinarity and the cultivation of human, data, and technological literacies are central to higher education, and partly by the intrinsic merits of studying these subjects together."

According to Ball, AI can benefit from at least two of these merits, with philosophy able to provide the technology with ethicality as well as explainability. This is particularly crucial at a time when facial recognition is increasingly under fire for being prone to unethical usage.

Artificial intelligence isn't the only field where philosophy might be useful, however. Over the last five years, Exasol chief data and analytics officer Peter Jackson has recruited plenty of data scientists, but not all of them come from a traditional IT or data science background. In fact, he says the best data scientist ever to be a part of one of his teams, someone creative, curious, and able to turn insights into compelling arguments, didn't hold a degree in computing or data science, but in philosophy. Jackson says that, when recruiting, he doesn't only look at candidates' technical skills and their ability to understand data, but also at their storytelling skills. According to him, this specific ability can be found in somebody who's done English literature, who is very good at writing poems and stories, and building a coherent argument.

"I need them to be able to interpret the output of that piece of work, either to me, or the rest of the team, or to stakeholders," he tells IT Pro. "If they can only go so far, and they have to hand [it] over to somebody else to tell the story, you can get a disconnect. The person who's telling the story might not be able to answer some of the technical questions that may arise: 'What training set did you use?' or 'Where did that data come from?'. So I try to recruit data scientists who are able to at least tell the first part of the story of their work."

However, Jackson notes that finding people with both data science and storytelling skills is very hard.

"Sometimes, because of particular skills that you need from the data science point of view, you quite often compromise on that. And that's where you do need the professional storytellers, professional writers who can support it," he adds.

Asked about his thoughts on the split between STEM sciences and the humanities, Jackson says that he doesn't see it as a dichotomy.

"I don't think there should be a divide. I think as a society, as an economy, we need smart, educated people, and I think that is the priority."

See the original post:

Humanities versus STEM: The forced dichotomy where no one wins - IT PRO

Read More..