
Working at the intersection of data science and public policy | Penn Today – Penn Today

One of the ideas you discuss in the book is algorithmic fairness. Could you explain this concept and its importance in the context of public policy analytics?

Structural inequality and racism are the foundation of American governance and planning. Race and class dictate who gets access to resources; they define where one lives, where children go to school, access to health care, upward mobility, and beyond.

If resource allocation has historically been driven by inequality, why should we assume that a fancy new algorithm will be any different? This theme is present throughout the book. Those reading for context get several in-depth anecdotes about how inequality is baked into government data. Those reading to learn the code get new methods for opening the algorithmic black box and testing whether a solution further exacerbates disparate impact across race and class.

In the end, I develop a framework called algorithmic governance, helping policymakers and community stakeholders understand how to trade off algorithmic utility against fairness.
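
To make that kind of check concrete, here is a minimal sketch (not taken from the book, whose examples are written in R): it compares a risk model's error rates across groups, with the column names and toy data chosen purely for illustration.

```python
# Minimal sketch of a disparate-impact check: compare a model's error rates
# across protected groups. Column names and data are illustrative only.
import pandas as pd

def group_error_rates(df: pd.DataFrame) -> pd.DataFrame:
    """False-positive and false-negative rates per group."""
    def rates(g: pd.DataFrame) -> pd.Series:
        fp = ((g["pred"] == 1) & (g["label"] == 0)).sum()
        fn = ((g["pred"] == 0) & (g["label"] == 1)).sum()
        neg = (g["label"] == 0).sum()
        pos = (g["label"] == 1).sum()
        return pd.Series({
            "false_positive_rate": fp / neg if neg else float("nan"),
            "false_negative_rate": fn / pos if pos else float("nan"),
        })
    return df.groupby("group").apply(rates)

# A model that over-flags group "B" shows up as a higher false-positive rate.
scores = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [0, 1, 0, 0, 0, 1],
    "pred":  [0, 1, 0, 1, 1, 1],
})
print(group_error_rates(scores))
```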

From your perspective, what are the biggest challenges in integrating tools from data science with traditional planning practices?

Planning students learn a lot about policy but very little about program design and service delivery. Once a legislature passes a $50 million line item to further a policy, it is up to a government agency to develop a program that can intervene with the affected population, allocating that $50 million in $500, $1,000 or $5,000 increments.

As I show in the book, data science combined with government's vast administrative data is good at identifying at-risk populations. But doing so is meaningless unless a well-designed program is in place to deliver services. Thus, the biggest challenge is not teaching planners how to code data science but how to consider algorithms more broadly in the context of service delivery. The book provides a framework for this by comparing an algorithmic approach to service delivery to the business-as-usual approach.

Has COVID-19 changed the way that governments think about data science? If so, how?

Absolutely. Speaking of service delivery, data science can help governments allocate limited resources. The COVID-19 pandemic is marked entirely by limited resources: from testing, PPE, and vaccines to toilet paper, home exercise equipment, and blow-up pools (the latter was a serious issue for my 7-year-old this past summer).

Government failed at planning for the allocation of testing, PPE, and vaccines. We learned that it is not enough for government to invest in a vaccine; it must also plan for how to allocate vaccines equitably to populations at greatest risk. This is exactly what we teach in Penn's MUSA Program, and I was disappointed at how governments at all levels failed to ensure that the limited supply of vaccine aligned with demand.

We see this supply/demand mismatch show up time and again in government, from disaster response to the provision of health and human services. I truly believe that data can unlock new value here, but, again, if government is uninterested in thinking critically about service delivery and logistics, then the data is merely a sideshow.

What do you hope people gain by reading this book?

There is no equivalent book currently on the market. If you are an aspiring social data scientist, this book will teach you how to code spatial analysis, data visualization, and machine learning in R, a statistical programming language. It will help you build solutions to address some of today's most complex problems.

If you are a policymaker looking to adopt data and algorithms into government, this book provides a framework for developing powerful algorithmic planning tools, while also ensuring that they will not disenfranchise certain protected classes and neighborhoods.

See the original post here:

Working at the intersection of data science and public policy | Penn Today - Penn Today


Jupyter has revolutionized data science, and it started with a chance meeting between two students – TechRepublic

Commentary: Jupyter makes it easy for data scientists to collaborate, and the open source project's history reflects this kind of communal effort.


If you want to do data science, you're going to have to become familiar with Jupyter. It's a hugely popular open source project that is best known for Jupyter Notebooks, a web application that allows data scientists to create and share documents that contain live code, equations, visualizations and narrative text. This proves to be a great way to extract data with code and collaborate with other data scientists, and has seen Jupyter boom from roughly 200,000 Notebooks in use in 2015 to millions today.
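
For readers curious what that document format looks like under the hood, here is a small sketch using nbformat, the reference library for reading and writing notebooks; the file name and cell contents are arbitrary.

```python
# Sketch of the notebook document format: a JSON file made of cells that mix
# narrative and executable code. nbformat is the reference library for it.
import nbformat
from nbformat.v4 import new_code_cell, new_markdown_cell, new_notebook

nb = new_notebook()
nb.cells = [
    new_markdown_cell("# Exploratory analysis\nA narrative cell written in Markdown."),
    new_code_cell("import math\nprint(math.pi)"),
]

with open("example.ipynb", "w", encoding="utf-8") as f:
    nbformat.write(nb, f)

# Reading it back shows the cell structure any Jupyter client renders.
loaded = nbformat.read("example.ipynb", as_version=4)
print([cell.cell_type for cell in loaded.cells])  # ['markdown', 'code']
```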

Jupyter is a big deal, heavily used at companies as varied as Google and Bloomberg, but it didn't start that way. It started with a friendship. Fernando Pérez and Brian Granger met the first day they started graduate school at University of Colorado Boulder. Years later in 2004, they discussed the idea of creating a web-based notebook interface for IPython, which Pérez had started in 2001. This became Jupyter, but even then, they had no idea how much of an impact it would have within academia and beyond. All they cared about was "putting it to immediate use with our students in doing computational physics," as Granger noted.

Today Pérez is a professor at University of California, Berkeley, and Granger is a principal at AWS, but in 2004 Pérez was a postdoctoral student in Applied Math at UC Boulder, and Granger was a new professor in the Physics Department at Santa Clara University. As mentioned, they first met as students in 1996, and both had been busy in the interim. Perhaps most pertinently to the rise of Jupyter, in 2001 Pérez started dabbling in Python and, in what he calls a "thesis procrastination project," he wrote the first IPython over a six-week stretch: a 259-line script now available on GitHub ("Interactive execution with automatic history, tries to mimic Mathematica's prompt system").


It would be tempting to assume this led to Pérez starting Jupyter; it would also be incorrect. The same counterfactual leap could occur if we remember that Granger wrote the code for the actual IPython Notebook server and user interface in 2011. This was important, too, but Jupyter wasn't a brilliant act by any one person. It was a collaborative, truly open source effort that perhaps centered on Pérez and Granger, but also people like Min Ragan-Kelley, one of Granger's undergraduate students in 2005, who went on to lead development of IPython Parallel, which was deeply influential in the IPython kernel architecture used to create the IPython Notebook.

However we organize the varied people who contributed to the origin of Jupyter, it's hard to get away from "that one conversation."

In 2004 Pérez visited Granger in the San Francisco Bay Area. The old friends stayed up late discussing open source and interactive computing, and the idea to build a web-based notebook came into focus as an extension of some parallel computing work Granger had been doing in Python, as well as Pérez's work on IPython. According to Granger, they half-jokingly talked about these ideas having the potential to "take over the world," but at that point their idea of "the world" was somewhat narrowly defined as scientific computing within a mostly academic context.

Years (and a great deal of activity) later, in 2009, Pérez was back in California, this time visiting Granger and his family at their home in San Luis Obispo, where Granger was now a professor. It was spring break, and the two spent March 21-24 collaborating in person to complete the first prototype IPython kernel with tab completion, asynchronous output and support for multiple clients.

By 2014, after a great deal of collaboration between the two and many others, Pérez, Granger and the other IPython developers co-founded Project Jupyter and rebranded the IPython Notebook as the Jupyter Notebook to better reflect the project's expansion outwards from Python to a range of other languages including R and Julia. Pérez and Granger continue to co-direct Jupyter today.

"What we really couldn't have foreseen is that the rest of the world would wake up to the value of data science and machine learning," Granger stressed. It wasn't until 2014 or so, he went on, that they "woke up" and found themselves in the "middle of this new explosion of data science and machine learning." They just wanted something they could use with their students. They got that, but in the process they also helped to foster a revolution in data science.

How? Or, rather, why is it that Jupyter has helped to unleash so much progress in data science? Rick Lamers explained:

Jupyter Notebooks are great for hiding complexity by allowing you to interactively run high-level code in a contextual environment, centered around the specific task you are trying to solve in the notebook. By ever-increasing levels of abstraction, data scientists become more productive, being able to do more in less time. When the cost of trying something is reduced to almost zero, you automatically become more experimental, leading to better results that are difficult to achieve otherwise.

Data science is...science; therefore, anything that helps data scientists to iterate and explore more, be it elastic infrastructure or Jupyter Notebooks, can foster progress. Through Jupyter, that progress is happening across the industry in areas like data cleaning and transformation, numerical simulation, exploratory data analysis, data visualization, statistical modeling, machine learning and deep learning. It's amazing how much has come from a chance encounter in a doctoral program back in 1996.

Disclosure: I work for AWS, but the views expressed herein are mine.


See original here:

Jupyter has revolutionized data science, and it started with a chance meeting between two students - TechRepublic


Gartner: AI and data science to drive investment decisions rather than "gut feel" by mid-decade – TechRepublic

Turns out, "calling it from the gut," may become a strategy of the past as data increasingly drives decision-making. But how will these data-driven approaches change investment teams?


In the age of digital transformation, artificial intelligence and data science are allowing companies to offer new products and services. Rather than relying on human-based intuition or instincts, these capabilities provide organizations with droves of data to make more informed business decisions.

Turns out, "calling it from the gut," as the adage goes, may become an approach of the past as data increasingly drives investment decisions. A new Gartner report predicts that AI and data science will drive investment decisions rather than "gut feel" by mid-decade.

"Successful investors are purported to have a good 'gut feel': the ability to make sound financial decisions from mostly qualitative information alongside the quantitative data provided by the technology company," said Patrick Stakenas, senior research director at Gartner, in a blog post. "However, this 'impossible to quantify inner voice' grown from personal experience is decreasingly playing a role in investment decision making."

Instead, AI and data analytics will inform more than three-quarters of "venture capital and early-stage investor executive reviews," according to a Gartner report published earlier this month.

"The traditional pitch experience will significantly shift by 2025, and tech CEOs will need to face investors with AI-enabled models and simulations as traditional pitch decks and financials will be insufficient," Stakenas said.


Alongside data science and AI, crowdsourcing will also play a role in "advanced risk models, capital asset pricing models and advanced simulations evaluating prospective success," per Gartner. While the company expects this data-driven approach, as opposed to an intuitive one, to become the norm for investors by mid-decade, the report also highlights a specific use case of these methods.

Correlation Ventures uses information gleaned from a VC financing and outcomes database to "build a predictive data science model," according to Gartner, allowing the fund to increase both the total number of investments and the investment process timeline "compared with traditional venture investing."

"This data is increasingly being used to build sophisticated models that can better determine the viability, strategy and potential outcome of an investment in a short amount of time. Questions such as when to invest, where to invest and how much to invest are becoming almost automated," Stakenas said.

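Gartner describes the pattern only in general terms; a rough sketch of that pattern, fitting a model to historical deal features and outcomes and then scoring a new opportunity, might look like the following. The feature names and synthetic data below are invented for illustration.

```python
# Rough sketch of the pattern described above: fit a model on historical deal
# features and outcomes, then score a new opportunity. The features and data
# here are synthetic stand-ins, not real investment signals.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(size=n),  # stand-in for, say, founder experience
    rng.normal(size=n),  # stand-in for market growth estimate
    rng.normal(size=n),  # stand-in for prior-round traction
])
# Synthetic "successful exit" label loosely tied to the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.2f}")

new_deal = np.array([[1.2, 0.3, -0.4]])
print(f"predicted success probability: {model.predict_proba(new_deal)[0, 1]:.2f}")
```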

A portion of the report delves into the myriad ways these shifts in investment strategy and decision making could alter the skills venture capital companies seek and transform the traditional roles of investment managers. For example, Gartner predicts that a team of investors "familiar with analytical algorithms and data analysis" will augment investment managers.

These new investors, who are "capable of running terabytes of signals through complex models to determine whether a deal is right for them," will apply this information to enhance "decision making for each investment opportunity," according to the report.

The report also includes a series of recommendations for tech CEOs to develop in the next half-decade. This includes correcting or updating quantitative metrics listed on social media platforms and company websites for accuracy. Additionally, to increase a tech CEO's "chances of making it to an in-person pitch," they should consider adapting leadership teams and ensuring "online data showcases diverse management experience and unique skills," the report said.


More:

Gartner: AI and data science to drive investment decisions rather than "gut feel" by mid-decade - TechRepublic


Postdoctoral Position in Transient and Multi-messenger Astronomy Data Science in Greenbelt, MD for University of MD Baltimore County/CRESST II -…

Postdoctoral Position in Transient and Multi-messenger Astronomy Data Science

The High Energy Astrophysics Science Archive Research Center (HEASARC) and Time-domain Astronomy Coordination Hub (TACH) at NASA's Goddard Space Flight Center (GSFC) invite applications for postdoctoral research positions in the fields of transient and/or multi-messenger astronomy. Applicants should have a strong astronomy research track record as well as deep expertise in the technical disciplines of full-stack software development, cloud computing, and data visualization. Experience in machine learning and/or time-series databases would also be beneficial.

Successful applicants will join HEASARC/TACH and have a central role in shaping Goddard's multi-messenger science output. This position is funded at 100% FTE. Approximately half of the applicant's time will be devoted to HEASARC/TACH, including activities such as software engineering, shaping next-generation Kafka-based NASA astronomy alert systems, pipeline development, and collaboration with Goddard-supported missions. The remainder of the applicant's time is available for self-driven research projects.

GSFC is home to over 100 Ph.D. astronomers, including project teams for Swift, Fermi, NICER, NuSTAR, TESS, JWST, and Roman, as well as ample computational resources. GSFC is also a member of the LIGO Scientific Collaboration. Through the Joint Space-Science Institute (JSI), GSFC is a partner in the Zwicky Transient Facility project. The successful applicants will also have the opportunity to apply for time on the 4.3m Lowell Discovery Telescope in Happy Jack, AZ.

The positions are for two years, renewable for a third year upon mutual agreement, and will be hired through the University of Maryland, Baltimore County on the CRESST II collaborative agreement with GSFC. The nominal starting date is in Fall 2021, but alternate dates are possible depending on availability. Candidates must have a Ph.D. in astronomy, physics, or a related field by the date of appointment.

Candidates should provide a cover letter, CV (including publication list), and a 3-page statement of research interests. Short-listed candidates will be asked to supply three letters of reference at a later date. Completed applications received by Friday, April 30, 2021 will receive full consideration. All application materials and inquiries should be sent to:

Transient and Multi-messenger Astronomy Data Science Postdoctoral Position
CRESST/UMBC
Mail Code 660.8, NASA/GSFC, Greenbelt, MD 20771
or via e-mail to katherine.s.mckee@nasa.gov

Salary and benefits are competitive, commensurate with experience and qualifications. For more information about the proposed research, contact Dr. Judith Racusin (judith.racusin@nasa.gov). For information on CRESST II or UMBC, contact Dr. Don Engel (donengel@umbc.edu).

UMBC is an equal opportunity employer and welcomes all to apply. EOE/M/F/D/V. The TACH project and NASA/GSFC are committed to building a diverse group and encourage applications from women, racial and ethnic minorities, individuals with disabilities and veterans.

Read more:

Postdoctoral Position in Transient and Multi-messenger Astronomy Data Science in Greenbelt, MD for University of MD Baltimore County/CRESST II -...


DefinedCrowd CEO Daniela Braga on the future of AI, training data, and women in tech – GeekWire

DefinedCrowd CEO Daniela Braga. (Dário Branco Photo)

Artificial intelligence is the fourth industrial revolution and women had better play a prominent role in its development, says Daniela Braga, CEO of Seattle startup DefinedCrowd.

"We left the code era to men over the last 30 years and look at where it got us," Braga told attendees of the recent Women in Data Science global conference. "Ladies, let's lead the world to a better future together."

Technology has of course led to amazing advancements in health, communications, education and entertainment, but it has also created a more polarized and extremist society, spread dangerous misinformation and excluded swaths of the population from participation. A 2018 study by Element AI found that only 13% of U.S. AI researchers were women.

Braga thinks we can do better. She is a co-founder of DefinedCrowd, an AI training data technology platform that launched in December 2015. Braga took over as CEO in mid-2016. The company is ranked No. 21 on the GeekWire 200, our list of top Pacific Northwest tech startups, and has reeled in $63.6 million in venture capital, including a $50 million round raised last year.

We caught up with Braga after the conference to learn how AI is usurping coding; the need to impose ethics and regulations on where it takes us; and the need for more women in the industry and in AI leadership. Here are some key takeaways:

"We spent five centuries in the print era, 30 years in software and now AI is on a path to supplant software," Braga said. And while coding and programmers drove software development, it's data and data scientists that produce AI.

"You don't program rules, you teach AI with data that is structured and [there's] a lot of it," Braga said. "The data allows us to train a brain, an artificial brain, in a week instead of what it used to take us months to code."

And it's so much more powerful. Traditional coding, which is essentially built from if-then decision-making rules, isn't capable of controlling complex tasks like self-driving cars or virtual assistants that require subtle assessments and decision making.
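
A toy contrast illustrates the difference; the intent-classification task, phrases and labels below are invented, and the snippet simply shows how behaviour that would otherwise need hand-written rules can instead be learned from labeled examples.

```python
# Toy contrast between hand-written rules and a model trained on data.
# The task and examples are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Rule-based: brittle, covers only the cases its author anticipated.
def rule_based_intent(text: str) -> str:
    if "weather" in text.lower():
        return "weather_query"
    if "play" in text.lower():
        return "play_music"
    return "unknown"

# Data-driven: the same behaviour is taught with labeled examples.
examples = [
    ("what's the forecast for tomorrow", "weather_query"),
    ("is it going to rain today", "weather_query"),
    ("put on some jazz", "play_music"),
    ("start my workout playlist", "play_music"),
]
texts, labels = zip(*examples)
vectorizer = CountVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

query = "will it rain this weekend"
print(rule_based_intent(query))                       # 'unknown'
print(clf.predict(vectorizer.transform([query]))[0])  # 'weather_query'
```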

We're still at the dawn of AI, Braga said, or what's called narrow AI. In these early days, the field needs to incorporate rules and standards to make sure that AI is used in ways that are ethical and unbiased and that protect privacy. Oversight is needed at an international level that brings in a diversity of voices.

"We need an alliance, almost like a United Nations for AI," she said.

The data used to train AI needs to be high quality, which for Braga means it's accurate, representative or unbiased. It also should be monitored, anonymized so it can't be traced to its sources, and collected with people's consent. Braga admittedly has a vested interest in this matter, as her company's business is to provide the data that companies use to train their AI. DefinedCrowd's focus is speech and natural language processing.

In an infamous case of what can happen when AI is trained on bad data, Microsoft's AI chatbot named Tay was quickly corrupted in 2016 when online users fed it racist, misogynistic language and conspiracy theories that the personal assistant parroted back to the public.

While we're in narrow AI now, the next steps are general AI and super AI, Braga said. As the technology matures, the different AI systems will be able to communicate with each other. Navigation, for example, will mix with voice interaction combined with text messaging. Home and work AI domains will talk together.

"There are some people who say when you start interlinking so many things you will have an AI that may become sentient," Braga said, creating technology such as the personal assistant in the movie Her, who is so lifelike and charming that the protagonist falls in love.

"The super AI is when AI is smarter, thinking faster, thinking better than humans," Braga said. "So that is the part that is science fiction, where the machine will take over the world. We're very far from that."

Women bring emotional intelligence that technology should have, Braga said. "It's just that emotional intelligence component, that creativity, that warmth, that should resemble more a human; that aspect does not come through built by men alone."

Data science is different from traditional software engineering. While the latter focused on programming languages, math and statistics, work in AI incorporates linguistics, psychology and ethics to a greater degree.

DefinedCrowd is trying to build a diverse workforce and has an office in Portugal, where Braga was born and raised. The company's staff is about 32% female, but it's difficult to recruit qualified women, particularly for senior roles. What's even tougher, Braga said, is finding women at her level for mentoring and support.

There are a handful of women founder/CEOs at AI-focused companies, including Daphne Koller of insitro and Rana el Kaliouby of Affectiva. And only 22 women who hold the title of founder/CEO have taken their companies public among thousands of IPOs over the decades, according to Business Insider.

"I always have a super hard time finding women to look up to because it just doesn't exist. I'm basically paving my way by myself. I don't have role models," Braga said. "It's really hard to not have a way to bounce ideas within a safe circle."

Read more here:

DefinedCrowd CEO Daniela Braga on the future of AI, training data, and women in tech - GeekWire


Here's how Data Science & Business Analytics expertise can put you on the career expressway – Times of India

Today, Data Science & Business Analytics has gained the status of being all-pervasive across functions and domains. The mammoth wings of data and analytics are determining everything from how we buy our toothpaste to how we choose dating partners to how we lead our lives. Nearly 90% of all small, mid-size, and large organizations have adopted analytical capabilities over the last 5 years to stay relevant in a market where large volumes of data are recorded every day. They use it to formulate solutions to build analysis models, simulate scenarios, understand realities and predict future states.

According to a recent report by LinkedIn, here are some of the fastest-growing, in-demand jobs of the past year and the next few years to come: hiring for the roles of Data Scientist, Data Science Specialist, Data Management Analyst, and Statistical Modeling has gone up by 46% since 2019. While there has been a surge in job openings, some common myths co-exist with them. Contrary to popular belief, you don't need a programming background or advanced math skills to learn Data Science and Business Analytics.

This is because most of the tools and techniques are easy to use and find ubiquitous application across domains, among professionals from vastly different industries like BFSI, Marketing, Agriculture, Healthcare, Genomics, etc. Good knowledge of statistics will need to be developed, though. Also, Data Science and Business Analytics is based on the use of common human intelligence that can be applied to solve any and all industry problems. Hence, you don't need Fourier series or advanced mathematical algorithms to build analytical models. Math learned up to the 10+2 level is good enough and can serve as a starting base for professionals in all domains.

Here are a few of the best high-paying jobs worth pursuing in this field:

1. Data Scientist

Data scientists have to understand the challenges of business and offer the best solutions using data analysis and data processing. For instance, they are expected to perform predictive analysis and run a fine-toothed comb through unstructured/disorganized data to offer actionable insights. They can also do this by identifying trends and patterns that can help the companies in making better decisions.

2. Data Architect

A data architect creates the blueprints for data management so that the databases can be easily integrated, centralized, and protected with the best security measures. They also ensure that the data engineers have the best tools and systems to work with. A career in data architecture requires expertise in data warehousing, data modelling, extraction, transformation and load (ETL), etc. You must also be well versed in tools like Hive, Pig, and Spark.

3. Data Analyst

A data analyst interprets data to analyse results for a specific business problem or bottleneck that needs to be solved. The role is different from that of a data scientist, who is involved in identifying and solving critical business problems that might add immense value if solved. Data analysts interpret data and analyse it using statistical techniques, improve statistical efficiency and quality, and implement databases, data collection tools, and data analytics strategies. They help with data acquisition and database management, recognize patterns in complex data sets, filter and clean data by reviewing it regularly, and perform analytics reporting.

4. Data Engineer

Today's companies make considerable investments in data, and the data engineer is the person who builds, upgrades, maintains and tests the infrastructure to ensure it can handle the algorithms thought up by data scientists. They develop and maintain architectures, align them with business requirements, identify ways to ensure data efficiency and reliability, perform predictive and prescriptive modelling, and engage with stakeholders to update and explain analytics initiatives. The good news is that the need for data engineers spans many different types of industries. As much as 46% of all data analytics and data engineering jobs originate from the banking and financial sector, but business analyst jobs can be found in e-commerce, media, retail, and entertainment industries as well.

5. Database Administrator

The database administrator oversees the use and proper functioning of enterprise databases. They also manage the backup and recovery of business-critical information. Learning about data backup and recovery, as well as security and disaster management, is crucial to moving up in this field. You'll also want a proficient understanding of topics covered in business analyst courses, like data modelling and design. They build high-quality database systems, enable data distribution to the right users, provide quick responses to queries, minimise database downtime, document and enforce database policies, and ensure data security, privacy, and integrity, among other responsibilities.

6. Analytics Manager

An analytics manager oversees all the aforementioned operations and assigns duties to the respective team leaders based on needs and qualifications. Analytics managers are typically well-versed in technologies like SAS, R, and SQL. They must understand business requirements, goals and objectives; source, configure, and implement analytics solutions; lead a team of data analysts; build systems for data analysis to draw actionable business insights; and keep track of industry news and trends.

Depending on your years of experience, the average Data Science and Business Analyst salary may range between Rs. 3,50,000 and Rs. 5,00,000. The lower end is the salary at an entry level with less than one year of work experience, and the higher end is the salary for those having 1-4 years of work experience.

As your experience increases over time, the salary you earn increases as well. A Business Analyst with 5-9 years of industry experience can earn up to Rs. 8,30,975, whereas a Senior Business Analyst with up to 15 years' experience earns close to Rs. 12,09,787. The location you are situated in also plays a significant role in compensation: a Business Analyst in Bangalore or Pune would earn around 12.9% and 17.7% more than the national average, respectively, while salaries in Hyderabad (4.2% less), Noida (8.2% less) and Chennai (5.2% less) fall below it.

For those interested in upskilling, Great Learning has emerged as one of India's leading professional learning services, with a footprint in 140 countries and 55 million+ learning hours delivered across the world. Top faculty and a curriculum formulated by industry experts have helped learners successfully transition to new domains and grow in their fields. It offers courses in some of today's most in-demand topics, such as Data Science and Business Analytics and Artificial Intelligence. Its PG program in Data Science and Business Analytics is offered in collaboration with The University of Texas at Austin and Great Lakes Executive Learning, and is becoming a sought-after course among working professionals across industries.

Here are a few highlights:

1. 11-month program: With a choice of online and classroom learning experience. The classroom sessions strictly follow all COVID safety measures.
2. World #4 Rank in Business Analytics: Analytics Ranking (2020) for Texas University
3. Hours of learning: 210+ hours of classroom learning content, 225+ hours of online learning content
4. Projects: 17 real-world projects guided by industry experts and one capstone project towards the end of the course

See more here:

Heres how Data Science & Business Analytics expertise can put you on the career expressway - Times of India


Yelp data shows almost half a million new businesses opened during the pandemic – CNBC

People order breakfast at Bill Smith's Cafe, after Texas Governor Greg Abbott issued a rollback of coronavirus disease (COVID-19) restrictions in McKinney, Texas, March 10, 2021.

Shelby Tauber | Reuters

Since the World Health Organization declared the coronavirus a pandemic one year ago Thursday, new Yelp data showed nearly a half million businesses opened in America during that time, an optimistic sign of the state of the U.S. economic recovery.

Between March 11, 2020 and March 1, 2021, Yelp has seen more than 487,500 new businesses listing on its platform in the United States. That's down just 14% compared with the year-ago period. More than 15% of the new entities were restaurant and food businesses.

The novel coronavirus, first discovered in China, is believed to have surfaced in Wuhan in late 2019, before spreading rapidly around the world, infecting 118 million people and causing 2.6 million deaths, according to data from Johns Hopkins University.

Virus mitigation efforts in nations all over the world, including the U.S., have ranged from full lockdowns to partial closures to reduced capacity of nonessential businesses and services. Masks and social distancing have been a hallmark of the pandemic. The economic damage from the crisis was swift.

However, according to data compiled by Yelp, which has released local economic impact reports throughout the pandemic, more than 260,800 businesses that had closed due to Covid restrictions reopened from March 11, 2020, until March 1. About 85,000 of them were restaurant and food businesses.

Justin Norman, vice president of data science at Yelp, sees optimism in the numbers.

"As more and more Americans continue to get vaccinated, case counts continue to lower, and Congress' Covid relief bill that offers additional aid is distributed, we anticipate businesses that were once struggling over the last year will bounce back," Norman told CNBC. "We see this evidenced through the 260,000 businesses that have been able to reopen after temporarily closing."

Of the almost half million new businesses that have opened, about 59% were within the "professional, local, home and auto" category on Yelp.

"The number of new business openings particularly the high number of new home, local, professional and auto services businesses also shows great potential for those industries in the future," Norman said.

Yelp said that certain trends borne out of the pandemic may be here to stay. As consumers spend more time at home, Yelp noted an uptick in interest in home improvement. The company saw that average review mentions for home office renovation increased by 75% year over year and bathroom renovations rose by 80%.

"I anticipate that we'll still see people invest in higher-quality home offices or improving their homes," Norman said. "With warmer summer months coming and the number of vaccines being administered continuing to increase, people who aren't planning to return to the office this year may focus on more home improvement projects."

Yelp's new business data also shows the restrictions brought on by the pandemic accelerated the need for businesses to adapt by using technology and changing the ways they interact with their customers.

Of the new business openings, the number of food trucks climbed 12% and food delivery businesses were up 128%.

"The increase in food delivery services would have easily been predicted, although we may not have predicted they would stay on the rise a year later," Norman said.

He also said he was surprised by how local businesses have incorporated the tools technology offers. "It's been incredibly impressive and encouraging to see how much local businesses, both in large cities and smaller towns, have embraced technology to serve customers during this challenging time."

Yelp also saw changes in the ways companies specifically interacted with its app. In 2020, 1.5 million businesses updated their hours through Yelp, 500,000 indicated that they were offering virtual services, and more than 450,000 businesses crafted a custom message at the top of their page, to speak directly to customers.

In addition to the positive data about food delivery and restaurants, Norman was surprised to see some trends through the year that indicated a change in how consumers engaged with everyday life. Yelp saw that consumer interest in psychics increased 74% year over year and astrologers rose by 63%. Yelp measures consumer interest in page views, posts, or reviews.

"It was also surprising to find that consumer interest in notaries were up 52% on Yelp, as many federal and state rules allowed remote notarization," Norman said. "While Yelp data can't provide an in-depth look into what people were notarizing over the last year, Yelp data does show a trend of couples holding smaller, more intimate weddings, instead of more traditional large wedding celebrations, as well as the housing market seeing an astounding demand coupled with low interest rates and housing prices in certain markets."

A year on, it's clear that the businesses that have survived have had to find new ways to operate, and that many of the changes will be permanent.

"We've seen more and more businesses embrace app-enabled delivery, software tools like reservations and waitlist and consumer-oriented communications tools like the Covid health and safety measures. The digital local business is here to stay," Norman said.

See more here:

Yelp data shows almost half a million new businesses opened during the pandemic - CNBC


Computer Science Meets Medicine in Drug Discovery | Womble Bond Dickinson – JDSupra – JD Supra

AI has the potential to revolutionize healthcare worldwide. In drug discovery, AI has already shown success. Sumitomo Dainippon Pharma and the UK-based AI company Exscientia developed DSP-1181 to treat obsessive-compulsive disorder. In clinical trials for treatment of solid tumors, the clinical-stage, AI-powered biotech BERG's BPM31510 (ubidecarenone) has already been granted Orphan Drug Designation by the FDA to treat pancreatic cancer and epidermolysis bullosa, a rare skin disorder causing blistering. AI-led drug discovery for COVID-19 is also in the works.

AI can analyze vast amounts of data quickly and predict outcomes using unbiased algorithms less prone to human mistakes. AI can inspire drug discovery by searching and analyzing data on behalf of chemists and recommending subsequent steps. Machine learning (ML), an AI application that allows computer algorithms to improve automatically without explicit programming, can be used to discover molecules that bind to and modify target proteins. ML can optimize the synthesis of molecular compounds, factoring in the availability of chemical components needed. Combining AI with automated systems (e.g., robots), scientists can test more compounds in shorter time, with more accuracy and reproducibility. These computerized systems can collect and search large amounts of records, allowing AI to rapidly identify patterns not readily discernible to humans. Integration of AI in the drug discovery and testing pipeline would increase efficiency and reduce expense.
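
As a rough sketch of one common version of this idea, the snippet below featurizes molecules with RDKit Morgan fingerprints and trains a scikit-learn classifier on a handful of labeled examples; the SMILES strings and activity labels are placeholders, not real assay data.

```python
# Sketch of a common pattern: featurize known actives/inactives with Morgan
# fingerprints (RDKit) and train a classifier to rank new candidates.
# The SMILES strings and activity labels are invented placeholders.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles: str) -> np.ndarray:
    """2048-bit Morgan (ECFP4-like) fingerprint for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
    return np.array(list(fp), dtype=np.int8)

# Placeholder training set: (SMILES, active-against-target label)
training = [
    ("CCO", 0), ("c1ccccc1O", 1), ("CC(=O)Oc1ccccc1C(=O)O", 1),
    ("CCCCCC", 0), ("Oc1ccc2ccccc2c1", 1), ("CCN(CC)CC", 0),
]
X = np.vstack([fingerprint(s) for s, _ in training])
y = np.array([label for _, label in training])

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank an unseen candidate by its predicted probability of activity.
candidate = "c1ccccc1CO"  # benzyl alcohol, purely as an example
score = model.predict_proba(fingerprint(candidate).reshape(1, -1))[0, 1]
print(f"predicted activity score for {candidate}: {score:.2f}")
```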

Since AI and ML require a large volume of data and networking capabilities, computing capacity is critical. In March 2020, IBM, The White House, and the US Department of Energy created the COVID-19 High Performance Computing (HPC) Consortium to provide supercomputing capacity for COVID-19-related research. In its first phase, the HPC Consortium, consisting of industry, government, and academia members worldwide, almost doubled its computing capacity. The amount of data available on COVID-19 also has grown substantially. In its second phase, the HPC Consortium is focusing on projects to help researchers identify potential near term therapies to improve the outcome of COVID-19 patients within a six-month timeframe. Projects include understanding and modeling of patient response to the COVID-19 virus, learning and validation of vaccine response models, evaluation of combination therapies using repurposed molecules, and epidemiological models.

AI and supercomputing capacity provided through the HPC Consortium allow researchers to rapidly search a vast volume of data to identify candidate drugs and compounds in drug discovery. A team at Michigan State University screened data from about 1,600 FDA-approved drugs and found at least two potential candidate antibacterial drugs, proflavine and chloroxine, that might be combined and repurposed to treat COVID-19. A team of scientists from PostEra, an ML chemistry startup, processed more than 2,000 compounds from crowdfunded submissions in 48 hours, and quickly created databases with more than 14 billion molecules available worldwide, in their search for a compound to block a key protein of SARS-CoV-2, the virus that causes COVID-19.

AI-based drug discovery is within the purview of the FDA, which has outlined a multi-step drug development process including discovery and development, preclinical research, clinical research, FDA drug review, and FDA post-market drug safety monitoring. The FDA's Technology Modernization Action Plan will expand and modernize the Agency's technology information systems to ensure that rapid advances in product translate into meaningful results for American consumers and patients.

We need AI to help us because the low-hanging fruit is long gone: we need to apply our very best approaches to deliver therapies for new generations. AI will bring about a new era in drug discovery and repurposing for new and complex diseases including COVID-19. With that, AI also brings conundrums: for example, who is the inventor of a drug designed and discovered by AI? The US Patent & Trademark Office and the European Patent Office both rejected patent applications naming AI as an inventor, holding that only natural persons can be inventors. Most patent offices and courts around the world have taken a similarly traditional approach. Then, is the inventor the person who created the algorithm of AI? Or the person who designed the data to feed AI? Can nobody get a patent if an AI system is fully responsible for the invention without human involvement? AI will require us to adopt a new legal and regulatory approach sooner or later.

The rest is here:

Computer Science Meets Medicine in Drug Discovery | Womble Bond Dickinson - JDSupra - JD Supra


Vanderbilt Data Science Institute hosts AI for conservation expert Tanya Berger-Wolf in virtual event on March 19 – Vanderbilt University News

A virtual discussion, "Trustworthy AI for Wildlife Conservation: AI and Humans Combating Extinction Together," will take place on March 19 at 2 p.m. CT. Registration is required. The discussion is hosted by the Vanderbilt Data Science Institute.

Artificial intelligence is increasingly the foundation of decisions big and small, affecting the lives of individuals and the well-being of our planet. Tanya Berger-Wolf, professor of computer science and engineering, electrical and computer engineering, and evolution, ecology and organismal biology at Ohio State University, will share how data-driven, AI-enabled decision processes can be deployed in the context of conservation. She will present an example of how such processes become trustworthy by opening opportunities for participation, supporting community-building, addressing inherent biases and providing transparent performance measures.

Berger-Wolf is a computational ecologist whose research sits at the unique intersection of computer science, wildlife biology and social sciences. She creates computational solutions to address questions such as how environmental factors affect the behavior of social animals, including humans.

Berger-Wolf is the director of the Translational Data Analytics Institute at OSU and director and co-founder of Wild Me, a tech-for-conservation software nonprofit that brings together computer vision, crowdsourcing and conservation. Its key project, Wildbook, enabled the first-ever full species census of the endangered Grévy's zebra through photographs taken by ordinary citizens in Kenya. The resulting numbers are now the official species census used by the IUCN Red List. Wildbook also includes whales, sharks, giraffes and many more species.

Berger-Wolf holds a Ph.D. in computer science from the University of Illinois at Urbana-Champaign. She has received numerous awards for her research and mentoring including University of Illinois Scholar, UIC Distinguished Researcher of the Year, National Science Foundation CAREER, Association for Women in Science Chicago Innovator and the UIC Mentor of the Year.

The Vanderbilt Data Science Institute accelerates data-driven research, promotes collaboration and trains future leaders. The institute brings together experts in data science methodologies with leaders in all academic disciplines to spark discoveries and to study the impact of big data on society. The institute is educating students in computational and statistical data science techniques to become future leaders in industry, government, academia and the nonprofit sector. This is the second discussion in the spring speaker series.

See original here:

Vanderbilt Data Science Institute hosts AI for conservation expert Tanya Berger-Wolf in virtual event on March 19 - Vanderbilt University News


Scientists may have solved ancient mystery of ‘first computer’ – The Guardian

From the moment it was discovered more than a century ago, scholars have puzzled over the Antikythera mechanism, a remarkable and baffling astronomical calculator that survives from the ancient world.

The hand-powered, 2,000-year-old device displayed the motion of the universe, predicting the movement of the five known planets, the phases of the moon and the solar and lunar eclipses. But quite how it achieved such impressive feats has proved fiendishly hard to untangle.

Now researchers at UCL believe they have solved the mystery at least in part and have set about reconstructing the device, gearwheels and all, to test whether their proposal works. If they can build a replica with modern machinery, they aim to do the same with techniques from antiquity.

"We believe that our reconstruction fits all the evidence that scientists have gleaned from the extant remains to date," said Adam Wojcik, a materials scientist at UCL. While other scholars have made reconstructions in the past, the fact that two-thirds of the mechanism are missing has made it hard to know for sure how it worked.

The mechanism, often described as the worlds first analogue computer, was found by sponge divers in 1901 amid a haul of treasures salvaged from a merchant ship that met with disaster off the Greek island of Antikythera. The ship is believed to have foundered in a storm in the first century BC as it passed between Crete and the Peloponnese en route to Rome from Asia Minor.

The battered fragments of corroded brass were barely noticed at first, but decades of scholarly work have revealed the object to be a masterpiece of mechanical engineering. Originally encased in a wooden box one foot tall, the mechanism was covered in inscriptions (a built-in user's manual) and contained more than 30 bronze gearwheels connected to dials and pointers. Turn the handle and the heavens, as known to the Greeks, swung into motion.

Michael Wright, a former curator of mechanical engineering at the Science Museum in London, pieced together much of how the mechanism operated and built a working replica, but researchers have never had a complete understanding of how the device functioned. Their efforts have not been helped by the remnants surviving in 82 separate fragments, making the task of rebuilding it equivalent to solving a battered 3D puzzle that has most of its pieces missing.

Writing in the journal Scientific Reports, the UCL team describe how they drew on the work of Wright and others, and used inscriptions on the mechanism and a mathematical method described by the ancient Greek philosopher Parmenides, to work out new gear arrangements that would move the planets and other bodies in the correct way. The solution allows nearly all of the mechanism's gearwheels to fit within a space only 25mm deep.
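
The arithmetic behind such gear trains, turning an astronomical period relation into whole-number tooth counts, can be illustrated with a short search like the one below. It uses the Metonic relation of 235 synodic months to 19 years, which is well attested for the mechanism, but the tooth counts it finds are purely illustrative and are not the UCL team's proposed gearing.

```python
# Illustration only: turn a period relation into a two-stage gear train
# (a/b) * (c/d) with practical tooth counts. The Metonic relation used here
# (235 synodic months in 19 years) is well attested for the mechanism, but
# the tooth counts found below are not the actual or proposed gearing.
from fractions import Fraction

def compound_train(target: Fraction, min_teeth: int = 20, max_teeth: int = 130):
    """Find tooth counts (a, b, c, d) with (a*c)/(b*d) == target."""
    for a in range(min_teeth, max_teeth):
        for c in range(min_teeth, max_teeth):
            # Required product of the driven wheels: b*d = a*c / target
            bd = Fraction(a * c, 1) / target
            if bd.denominator != 1:
                continue  # a*c is not compatible with the target ratio
            for b in range(min_teeth, max_teeth):
                if bd.numerator % b == 0 and min_teeth <= bd.numerator // b < max_teeth:
                    return a, b, c, bd.numerator // b
    return None

metonic = Fraction(235, 19)  # lunar months per solar year over the 19-year cycle
print(compound_train(metonic))  # (94, 20, 100, 38): (94*100)/(20*38) == 235/19
```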

According to the team, the mechanism may have displayed the movement of the sun, moon and the planets Mercury, Venus, Mars, Jupiter and Saturn on concentric rings. Because the device assumed that the sun and planets revolved around Earth, their paths were far more difficult to reproduce with gearwheels than if the sun was placed at the centre. Another change the scientists propose is a double-ended pointer they call a Dragon Hand that indicates when eclipses are due to happen.

The researchers believe the work brings them closer to a true understanding of how the Antikythera device displayed the heavens, but it is not clear whether the design is correct or could have been built with ancient manufacturing techniques. The concentric rings that make up the display would need to rotate on a set of nested, hollow axles, but without a lathe to shape the metal, it is unclear how the ancient Greeks would have manufactured such components.

"The concentric tubes at the core of the planetarium are where my faith in Greek tech falters, and where the model might also falter," said Wojcik. "Lathes would be the way today, but we can't assume they had those for metal."

Whether or not the model works, more mysteries remain. It is unclear whether the Antikythera mechanism was a toy, a teaching tool or had some other purpose. And if the ancient Greeks were capable of such mechanical devices, what else did they do with the knowledge?

"Although metal is precious, and so would have been recycled, it is odd that nothing remotely similar has been found or dug up," Wojcik said. "If they had the tech to make the Antikythera mechanism, why did they not extend this tech to devising other machines, such as clocks?"

Read the original:

Scientists may have solved ancient mystery of 'first computer' - The Guardian
