
Media advisory: Kevin Leicht to testify before congressional subcommittee about disinformation – University of Illinois News

CHAMPAIGN, Ill. - Kevin T. Leicht, a professor of sociology at the University of Illinois Urbana-Champaign, will testify before the U.S. House of Representatives Science, Space and Technology Committee's Subcommittee on Investigations and Oversight on Tuesday, Sept. 28.

Leicht's remarks will focus on the role of internet disinformation in fomenting distrust of experts and established scientific knowledge, in particular with respect to COVID-19 vaccines and miracle cures.

Leicht and Alan Mislove, of Northeastern University, and Laura Edelson, of New York University, will co-present "The Disinformation Black Box: Researching Social Media Data" at 10 a.m. EDT. They will testify remotely via videoconferencing.

The hearing will be livestreamed at https://science.house.gov/hearings.

Leicht's research centers on the political and social consequences of social inequality and cultural fragmentation. His current work explores the growing skepticism toward scientists and attacks on experts and established scientific knowledge spread via social media.

Leicht is the principal investigator on the National Science Foundation-funded project "RAPID: Tracking and Network Analysis of the Spread of Misinformation Regarding COVID-19."

U. of I. faculty members Joseph Yun, the director of data science research services and a professor of accounting in the Gies College of Business, and Brant Houston, the Knight Chair Professor in Investigative and Enterprise Reporting in the College of Media, are co-principal investigators on the project.

See original here:

Media advisory: Kevin Leicht to testify before congressional subcommittee about disinformation - University of Illinois News

Read More..

Heard on the Street 9/27/2021 – insideBIGDATA

Welcome to insideBIGDATA's "Heard on the Street" round-up column! In this new regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!

Amplitude Filing for Direct Listing on Nasdaq. Commentary by Jeremy Levy, CEO at Indicative.

To some extent, Amplitude's valuation and filing are wins for everyone in product analytics, including Indicative. Amplitude's achievement is a massive validation for our market. If the company launched today, though, it would not have this same level of success because the market is clearly transitioning to the cloud data warehouse model, something Amplitude is simply not compatible with. And while this model has been written about at length by firms like Andreessen and Kleiner, the more tangible predictor of this trend is the continued acceleration of growth at Snowflake and other cloud data providers like Amazon and Google. Amplitude has been able to leverage strong word of mouth and an easy integration to this point. But being incompatible with what has rapidly become accepted as the ideal way to build a data infrastructure, meaning products that can interface directly with the cloud data warehouse, is a serious threat to their continued growth. Amplitude's requirements for replicating and operationalizing customers' data reflect a decades-old approach. Their solution is built for today but not for tomorrow. In time, especially given the increased scrutiny of shareholders and earnings reports, the shortcomings of Amplitude's approach will catch up with them.

A New Culture of AI Operationalization is Needed to Bring Algorithms from the Playground to the Business Battleground. Commentary by Mark Palmer, SVP, Engineering at TIBCO.

Data science is taking off and failing at the same time. A recent survey by NewVantage Partners found that 92% of companies are accelerating their investment in data science; however, only 12% of these companies deploy artificial intelligence (AI) at scale, down from the previous year. Companies are spending more on data science but using less of it, so we need to bring AI from the playground to the battleground. The problem is that most firms have yet to establish a culture of AI operationalization. Technology, while not the answer, helps put wind behind the sails of that cultural change. For example, model operationalization (ModelOps) helps AI travel the last mile from the data science laboratory, or playground, to the business user, or the battleground, like an Uber Eats for algorithms. ModelOps makes it easy to understand how to secure and manage algorithm deployment, allowing business leaders to get comfortable with AI. It also encourages collaboration between data scientists and business leaders, allowing them to bond as a team. The other benefit of a culture of AI operationalization is bias identification and mitigation. Reducing bias is hard, but the solution is often hidden in plain sight: AI operationalization teams help firms more easily assess bias and decide how to act to reduce it. A culture of AI operationalization helps data scientists focus on research and deliver algorithms to the business in a transparent, safe, secure, unbiased way.

Strong DevOps Culture Starts with AIOps and Intelligent Observability. Commentary by Phil Tee, CEO and founder of Moogsoft.

DevOps is a culture about the collective we and building a blameless, team-centric workplace. But DevOps must be supported by tools that enable collaboration on solutions that will impact the collective whole. AIOps with intelligent observability helps shore up a strong DevOps culture by encouraging collaboration, trust, transparency, alignment and growth. By leveraging AIOps with intelligent observability, DevOps practitioners remove individual silos and give teams the visibility they need to collaborate on incidents and tasks. By getting their eyes on just about everything, employees can connect across teams, tools and systems to find the best solutions. And professional responsibilities seamlessly transfer between colleagues. AI also automates the toil out of work, so teams leave menial tasks at the door, do more critical thinking and bond over building better technologies. AIOps with intelligent observability enhances the transparency and collaboration of your DevOps culture, encourages professional growth and shuts down toxic workplace egos to create a more innovative, more agile organization.

Machine Learning Tech Makes Product Protection Accessible to Retailers of All Sizes. Chinedu Eleanya, founder and CEO of Mulberry.

More and more companies are turning to machine learning but often too late in their development. Yes, machine learning can open up new product opportunities and increase efficiency through automation. But to truly take advantage of machine learning in a tech solution, a business needs to plan for that from the beginning. Attempting to insert aspects of machine learning into an existing product can, at worst, result in features for the sake of machine learning features and, at best, require rebuilding aspects of the existing product. Starting early with machine learning can require more upfront development but can end up being the thing that separates a business from existing solutions.

Artificial intelligence risks to privacy demand urgent action. Patricia Thaine, CEO of Private AI.

The misuse of AI is undoubtedly one of the most pressing human rights issues the world is facing today, from facial recognition for minority group monitoring to the ubiquitous collection and analysis of personal data. Privacy by Design must be core to building any AI system for digital risk protection. Thanks to excellent data minimization tools and other privacy enhancing technologies that have emerged, even the most strictly regulated data [healthcare data] are being used to train state-of-the-art AI systems in a privacy-preserving way.

Why iPaaS is a fundamental component of enterprise technology stacks. André Lemos, Vice President of Products for iText.

Integration Platform-as-a-Service (iPaaS) is rapidly becoming a fundamental component of enterprise technology stacks. And it makes total sense. IT organizations worldwide are dealing with an increasing number of software systems. Whether they are installed within the corporate network, in a cloud service provider's infrastructure, or offered by a third-party SaaS provider, business groups want to use more software. And that creates a lot of fragmentation and complexity, especially when those systems need to be connected together or data needs to be shared between them. Selecting an iPaaS platform has as much to do with the features as the ecosystem. Without a healthy catalog of systems to choose from, the platform is practically useless. Remember that the goal of an iPaaS platform is to make connecting disparate systems easier and simpler. Before there was iPaaS, companies had to create their own middleware solutions, which took valuable engineering resources to both develop and maintain. With iPaaS, developers and IT resources can simply select systems to include in their workflow.

The data scientist shortage and potential solutions. Commentary by Digital.ai's CTO and GM of AI & VSM Platform, Gaurav Rewari.

More than a decade ago, the McKinsey Global Institute called out an impending data scientist shortage of more than 140,000 in the US alone. Since then, the projections for a future shortfall have only become more dire. A recent survey from S&P Global Market Intelligence and Immuta indicates that 40% of the 500+ respondents who worked as data suppliers said that they lacked the staff or skills to handle their positions. Further, while the Chief Data Officer role was gaining prominence, 40% of organizations did not have this position staffed. All of this comes against the backdrop of increasing business intelligence user requests from organizations desperate to use their own data as a competitive advantage. Addressing this acute shortage requires a multi-faceted approach, not least of which involves broadening the skills of existing students and professionals to include data science capabilities through dedicated data science certificates and programs, as well as company-sponsored cross-training for adjacent talent pools such as BI analysts. On the product front, key capabilities that can help alleviate this skills shortage include: (i) greater self-service capabilities, so that business users with little-to-no programming expertise and knowledge of the underlying data structures can still ask questions using a low-code or no-code paradigm; (ii) pre-packaged AI solutions that have all the data source integrations, pipelines, ML models and visualization capabilities prebuilt for specific domains (e.g., CRM, Finance, IT/DevOps), so that business users can obtain best-practice insights and predictive capabilities in those chosen domains. When successfully deployed, such capabilities have the power to massively expand the reach of a company's data scientists many times over.

Unemployment Fraud Is Raging and Facial Recognition Isn't The Answer. Commentary by Shaun Barry, Global Lead for Government and Healthcare at SAS.

Since March 2020, approximately $800 billion in unemployment benefits has been distributed to over 12 million Americans, reflecting the impact of COVID-19 on the U.S. workforce. While unemployment benefits have increased, so have bad actors taking advantage of these benefits. It is estimated that between $89 billion and $400 billion of that spending has gone to fraudulent unemployment claims. To combat fraudsters and promote equitable access, the American Rescue Plan Act provides $2 billion to the U.S. Dept. of Labor. However, two technology approaches the government has been pursuing to combat UI fraud, facial recognition and data matching, introduce an unintended consequence: inequities that unfairly limit access to unemployment benefits for minority and disadvantaged communities. For example, facial recognition has struggled to accurately identify individuals with darker skin tones, and most facial recognition requires the citizen to own a smartphone, which impacts certain socioeconomic groups more than others. Data matching and identity solutions rely on credit history-based questions such as type of car owned, previous permanent addresses, strength of credit, and existence of credit and banking history (all requirements that negatively impact communities of color, the young, the unbanked, immigrants, etc.). There is a critical need to evaluate the value of a more holistic approach that draws on identity analytics from data sources that do not carry the same type of inherent equity and access bias. By leveraging data from sources with fewer inherent biases, such as digital devices, IP addresses, mobile phone numbers and email addresses, agencies can ethically combat unemployment fraud. Data-driven identity analytics is key not only to identifying and reducing fraud, but also to reducing friction for citizens applying for legitimate UI benefits. The analytics happens on the back end, requiring the data the user has provided and nothing more. Only when something suspicious is flagged would the system introduce obstacles, like having to call a phone number to verify additional information. By implementing a more holistic, data-driven approach, agencies can avoid the pitfalls of bias and inequity that penalize the communities who need UI benefits the most.

How boards can mitigate organizational AI risks. Commentary by Jeb Banner, CEO of Boardable.

AI has proven to be beneficial for digital transformation efforts in the workplace. However, few understand the risks of implementing AI, including tactical errors, biases, compliance issues and security, to name a few. While the public sentiment of AI is positive, only 35% of companies intend to improve the governance of AI this year. The modern boardroom must understand AI, including its pros and cons. A key responsibility of the board of directors is to advise their organization on implementing the AI technology responsibly while overcoming the challenges and risks associated with it. To do so, the board should deploy a task force dedicated to understanding AI and how to use the technology ethically. The task force can work in tandem with technology experts and conduct routine audits to ensure AI is being used properly throughout the organization.

How a digital workforce can meet the real-time expectations of today's consumer. Commentary by Bassam Salem, Founder & CEO at AtlasRTX.

Consumer expectations have never been higher. We want digital. We want on-demand. We want it on our mobile phone. We want to control our own customer journey and expect that immediacy 24 hours a day because business hours no longer exist. Thanks to the likes of Amazon and Tesla, the best experiences are the ones with the least friction, the most automation, and minimal need for human intervention. Interactions that rely solely on a human-powered team are not able to meet this new demand, so advanced AI technology must be implemented to augment and support staff. AI-powered digital assistants empower consumers to find answers on their terms, in their own language, at any time of day. These complex, intelligent chatbots do more than just answer simple questions; they connect with customers through social media, text message, and webchat by humanizing interactions through a mix of Machine Learning (ML) and Natural Language Processing (NLP). Today's most advanced chatbots are measured by intelligence quotient (IQ) and emotional intelligence (EQ), continually learning from every conversation. As new generations emerge that are equally, if not more, comfortable interacting with machines, companies must support their human teams with AI-powered digital colleagues that serve as the front line to deliver Real-Time Experiences (RTX), powered and managed by an RTX platform that serves as the central nervous system of the augmented digital workforce.

What sets container attached and container native storage apart. Commentary by Kirby Wadsworth of ionir.

The advent of containers has revolutionized how developers create and deliver applications. The impact is huge; we've had to re-imagine how to store, protect, deliver and manage data. The container-attached (CAS) or container-ready approach is attractive because it uses existing traditional storage, promising the ability to reuse existing investments in storage, and may make sense as an initial bridge to the container environment. What's different about container-native storage (CNS) is that it is built for the Kubernetes environment. CNS is a software-defined storage solution that itself runs in containers on a Kubernetes cluster. Kubernetes spins up more storage capacity, connectivity services, and compute services as additional resources are required. It copies and distributes application instances. If anything breaks, Kubernetes restarts it somewhere else. As the orchestration layer, Kubernetes generally keeps things running smoothly. CNS is built to be orchestrated, but container-ready or container-attached storage isn't easily orchestrated. Organizations have many storage options today, and they need more storage than ever. With containers added to the mix, the decisions can become harder to make. Which approach will best serve your use case? You need to understand the difference between container-attached and container-native storage to answer this question. Carefully consider your needs and your management capabilities, and choose wisely.

Data Quality Issues Organizations Face. Commentary by Kirk Haslbeck, vice president of data quality at Collibra.

Every company is becoming a data-driven business, collecting more data than ever before, establishing data offices and investing in machine learning. But there are also more data errors than ever before, including duplicate data, inaccurate or inconsistent data, and data downtime. Many people start machine learning or data science projects but end up spending the bulk of their time (studies suggest around 80%) trying to find and clean data rather than engaging in productive data science activities. Data quality and governance have traditionally been seen as a necessity rather than a strategic pursuit. But a healthy data governance and data quality program equates to more trust in data, greater innovation, and better business outcomes.

Sign up for the free insideBIGDATA newsletter.

Join us on Twitter: @InsideBigData1 https://twitter.com/InsideBigData1

Go here to read the rest:

Heard on the Street 9/27/2021 - insideBIGDATA

Read More..

New Business Institute at UT Austin Will Specialize in Sports Analytics – Diverse: Issues in Higher Education

Students will soon have the chance to study the art of sports analytics at a new Business of Sports Institute within the University of Texas at Austin's McCombs School of Business, established by a $1.4 million gift from Accenture, a Fortune Global 500 company that specializes in IT services and consulting.

"This partnership hinges on the power of Accenture's capabilities and proven track record of turning insights into revenue-generating businesses," Berger said. "That coupled with UTs dedication to athletic excellence and McCombs position as a leading business program, creates an unbeatable formula for pushing the envelope in sports analytics, sports science and sports business.

The new institute will create curriculum and applied research opportunities related to sports analytics and business, offering tracks focusing on data science and analytics, entrepreneurship and the science of high performance. According to an April 2020 report in Forbes, the sports analytics market is expanding at a rate of more than 30% and is expected to reach $4.6 billion by 2025.

"There is no other major business school in the country bringing on-field, on-court performance analytics into the curriculum, into the research lab, and to sports industry leaders like we are," said Ethan Burris, faculty director of the McCombs School's Center for Leadership and Ethics. "Talent management, performance metrics, sports-adjacent verticals and branding: there are a ton of topic areas we are poised to tackle."

Originally posted here:

New Business Institute at UT Austin Will Specialize in Sports Analytics - Diverse: Issues in Higher Education

Read More..

How AI is Transforming The Race Strategy Of Electric Vehicles – Analytics India Magazine

Formula E has grown in popularity as a sustainable sport that pioneers advancements in electric car technology. Its premise is not only that the cars are all electric but also that the 11 teams, each with two drivers, compete in identically configured, battery-powered electric race cars.

"How can we use the data to aid Formula E's racing strategy?" - Vikas Behrani

Vikas Behrani, Vice President of Data Science at Genpact, spoke at the Deep Learning DevCon 2021, organized by The Association of Data Scientists. In his session, he discussed "Lap Estimate Optimizer: Transforming race-day strategy with AI" and gave insights into how a Formula E race is not only about driver ability and technique but also about data-driven strategy.

(Source: Vikas Behrani | DLDC 2021)

Behrani went into greater detail about the Formula E race's characteristics. It is a racing series dedicated entirely to electric cars. Within each season, extensive performance data on racing dynamics, the driver, and the car itself from the previous seven seasons provides a great foundation for forecasting/simulation employing cutting-edge optimization and data science methods.

With a wealth of available data ranging from past driver performances, lap times, standings in previous races and weather to technical information about the car such as the battery, tyres and engine, data scientists can forecast the number of laps a car can complete by quantifying behavioural characteristics such as the driver's risk-taking appetite, along with other factors such as track information and weather that can affect a car's performance. Additionally, Behrani discussed how this relates to other industries and how similar models for racing strategy can be applied to banking, insurance, and other manufacturing sectors.

(Source: Vikas Behrani | DLDC 2021)

Vikas stated during his discussion of the model that the objective of this exercise is to define the process for forecasting the number of laps a car would complete in 45 minutes during a future race using historical data. He then described the model for the Lap Estimate Optimizer. To forecast the number of laps completed at the end of each race, an ensemble model is developed using a combination of an intuitive mathematical model and an instinctual deep learning model.

There are numerous features such as lap number, previous lap time, fastest qualifying time, track length, and projected time. These characteristics will be fed into a neural network model used to forecast the lap time. We constructed and compared a total of 32 models.

Behrani went into detail regarding the steps involved in LEO; a rough code sketch of the idea follows the list below.

Step-1: Collect historical data on the quickest lap times.

Step-2: Collect historical data on the fastest lap times of rank-1 drivers.

Step-3: Normalize the quickest lap times from step 1 by subtracting them from the matching values in step 2.

Step-4: Using the distribution from step 3, simulate data that follows the same distribution.

Step-5: To the simulated values from step 4, add the quickest lap time from the qualifying and practice sessions.

Step-6: Add the values in the resulting matrix row by row until reaching 45 minutes.
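
To make those steps concrete, here is a rough Python sketch of the simulation idea. It is not Genpact's actual LEO implementation: the historical lap times, the qualifying best and the simple random draw are all hypothetical placeholders.

import random

# Steps 1-2: hypothetical historical fastest lap times (seconds),
# overall and for rank-1 drivers
fastest_laps = [93.2, 94.1, 92.8, 95.0, 93.7]
rank1_fastest_laps = [91.5, 92.0, 91.8, 92.4, 91.9]

# Step 3: normalize by differencing the two series
deltas = [f - r for f, r in zip(fastest_laps, rank1_fastest_laps)]

# Steps 4-5: simulate lap times with the same spread, offset by the
# quickest lap from qualifying/practice (hypothetical value, in seconds)
qualifying_best = 92.3

def simulated_lap():
    return qualifying_best + random.choice(deltas)

# Step 6: accumulate simulated lap times until the 45-minute mark
def estimated_laps(race_seconds=45 * 60):
    elapsed, laps = 0.0, 0
    while True:
        lap = simulated_lap()
        if elapsed + lap > race_seconds:
            return laps
        elapsed += lap
        laps += 1

print(estimated_laps())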

Behrani later discussed the predictions for Santiago, Mexico, and Monaco and how the effort on the track translates into market impact. Finally, he went on to illustrate several use cases.

This exercise aims to determine how to use previous data to forecast how many laps an automobile would finish in 45 minutes. An intuitive mathematical model and an instinctual deep learning model are combined to anticipate the number of laps at the end of each race.

Nivash has a doctorate in Information Technology. He has worked as a Research Associate at a University and as a Development Engineer in the IT Industry. He is passionate about data science and machine learning.

Excerpt from:

How AI is Transforming The Race Strategy Of Electric Vehicles - Analytics India Magazine

Read More..

Metropolitan Chicago Data-science Corps to partner with area organizations on projects – Northwestern University NewsCenter

Five Illinois universities, led by Northwestern University, have established the Metropolitan Chicago Data-science Corps (MCDC) to help meet the data science needs of the Chicago metropolitan area. The interdisciplinary corps will assist a wide range of community-based groups in taking advantage of increasing data volume and complexity while also offering data science students opportunities to apply their skills.

"The amount of data produced in society today can be overwhelming to nonprofit organizations, especially those without pertinent resources, but data can help them fulfill their missions," said Northwestern's Suzan van der Lee, who spearheaded the initiative and is a professor of Earth and planetary sciences in the Weinberg College of Arts and Sciences.

The MCDC team is led by 11 co-directors. Northwestern's co-directors are Van der Lee, Michelle Birkett, Bennett Goldberg and Diane Schanzenbach. The MCDC team also includes Northwestern faculty involved in new data science minor and major programs offered by Weinberg College and the McCormick School of Engineering, including Arend Kuyper and Jennie Rogers.

In addition to Northwestern, the partner universities are DePaul, Northeastern Illinois and Chicago State universities and the School of Information Sciences (iSchool) of the University of Illinois at Urbana-Champaign. The corps will be supported by a new grant from the National Science Foundation of nearly $1.5 million over three years.

Requests for data services are now being accepted from nonprofit and governmental organizations in the metropolitan Chicago area. Data challenges in the areas of the environment, health and social well-being are of particular interest to the corps.

We are sharing our expertise to help community organizations use data to their advantage.

The city of Chicago, the Greater Chicago Food Depository, Howard Brown Health and The Nature Conservancy are just some of the organizations the MCDC will be working with as community partners.

"With this new data science corps, we are sharing our expertise to help community organizations use data to their advantage," Van der Lee said. "And interdisciplinary teams of Chicagoland data science students will receive hands-on training on how to partner with the community organizations, with the goal of completing projects with real-world impact."

Van der Lee is a member of the Northwestern Institute on Complex Systems (NICO), which will provide administrative infrastructure to the MCDC.

The MCDC aims to strengthen the national data science workforce and infrastructure by integrating the needs of community organizations with academic learning. Teams of undergraduate students at the partner universities will work on data science projects provided by the organizations as part of the students curriculum.

"Despite a global pandemic, we have seen our region's technology industry flourish, an achievement that is undoubtedly thanks to dynamic partnerships forged between our incredible city and state universities," Chicago Mayor Lori E. Lightfoot said. "The MCDC is the latest of such partnerships, and it will deepen our regional strength in data science while simultaneously enhancing how nonprofit organizations and government bodies utilize data-driven programs to strengthen our communities."

The Metropolitan Chicago Data-science Corps unites diverse students and faculty across institutions and disciplines. At Northwestern, involved faculty come from six schools, including Weinberg, McCormick and Northwestern University Feinberg School of Medicine.

Northwestern undergraduate students taking the practicum course can be from any discipline and will have had at least a year's worth of data science courses. Master's students can volunteer to be project managers. The first of several practicum courses at Northwestern will be offered in the winter quarter this academic year. Students completing the course will then be eligible for paid summer internships in which they can work more in-depth on projects with students from partner universities.

In the third year of the grant, the MCDC plans to work with faculty at a city college and a community college to implement its curriculum there, further expanding data science education in metropolitan Chicago.

See the original post:

Metropolitan Chicago Data-science Corps to partner with area organizations on projects - Northwestern University NewsCenter

Read More..

Argentine project analyzing how data science and artificial intelligence can help prevent the outbreak of Covid-19 | Chosen from more than 150…

Can data science and artificial intelligence help prevent COVID-19 outbreaks? This is the focus of an Argentine research project, coordinated by the Interdisciplinary Center for Studies of Science, Technology and Innovation (Ciecti), which was selected from more than 150 proposals from around the world and will receive funding from Canada and Sweden.

The project is called Arphai (an acronym for General Research on Data Science and Artificial Intelligence for Epidemic Prevention), and its goal is to develop tools, models and recommendations that help predict and manage epidemic events such as Covid-19 but are replicable for other viruses.

The initiative originated from Ciecti, a civic association set up by the National University of Quilmes (UNQ) and the Latin American College of Social Sciences (FLACSO Argentina), and was selected along with eight other proposals based in Africa, Latin America and Asia. In Latin America, only two were selected: Arphai in Argentina and another project in Colombia.

Based on this recognition, it will be funded by the International Development Research Centre (IDRC) in Canada and the Swedish International Development Cooperation Agency (Sida), under the Global South AI4COVID programme.

The project is coordinated by Ciecti and involves the Planning and Policy Secretariat of the Ministry of Science, Technology and Innovation and the National Information Systems Directorate of the Access to Health Secretariat of the Argentine Ministry of Health.

Also working on the initiative are researchers, technical teams from the public administration and members of 19 institutions, including universities and research centers, in six Argentine provinces and the city of Buenos Aires.

The main goal is to develop technological tools based on artificial intelligence and data science that can be applied to electronic health records (EHR) and make it possible to anticipate and detect potential epidemic outbreaks and support preventive decision-making in the field of public health regarding Covid-19.

Among the tasks carried out, progress has also been made on a pilot project to implement the electronic medical record designed by the Ministry of Health (the Integrated Health History, HSI) in the health networks of two municipalities on the outskirts of Buenos Aires, in order to synthesize lessons learned and design a scale-up strategy at the national level.

Another goal is to prioritize the perspective of equity, particularly gender equity, a criterion expressed in efforts to mitigate biases in the prototypes developed (models, algorithms), in the analysis of and concern for the databases used, and in the diverse composition of the teams: 60% of the project staff are women, many of whom are in leadership positions.

Arphai operates under strict standards of confidentiality, protection and anonymity of data and is endorsed by the Ethics Committee of the National University of Quilmes (UNQ).

View original post here:

Argentine project analyzing how data science and artificial intelligence can help prevent the outbreak of Covid-19 | Chosen from more than 150...

Read More..

Business of Sports Institute at UT McCombs School Founded by Gift from Accenture – UT News – UT News | The University of Texas at Austin

AUSTIN, Texas - A new sports education and research venture unlike any other in the United States, one that will meet a pressing need in the sports industry, is coming to The University of Texas at Austin.

Accenture has donated the founding $1.4 million gift to establish a Business of Sports Institute in the McCombs School of Business at UT Austin. The new institute will bring together all the advantages of a top business school at a major research institution with an elite sports program and combines those with the expertise in sports business consulting and analytics that Accenture brings to this multiyear partnership.

"There is no other major business school in the country bringing on-field, on-court performance analytics into the curriculum, into the research lab, and to sports industry leaders like we are," said Ethan Burris, faculty director of the McCombs School's Center for Leadership and Ethics, in which the new institute will be housed. "Talent management, performance metrics, sports-adjacent verticals and branding: there are a ton of topic areas we are poised to tackle."

The new Business of Sports Institute will create:

Research within the institute is already underway in men's and women's basketball, with plans to ramp up quickly to other sports.

"This partnership is a colossal boon to our research," Burris said. "We now have the financial resources to hire unique and specialized talent, for example, experts in biomechanics or in data visualization. And Accenture is devoting significant talent and expertise: project managers, data scientists and other engineers."

Using data analytics in new ways became a worldwide obsession after the publication in 2003 of Moneyball, which chronicled how Oakland A's general manager Billy Beane got his bottom-of-the-league team to the playoffs using sabermetrics to hire undervalued but winning players on the smallest budget in the league. His success spurred a new generation of sports data analytics advances.

Jon Berger, managing director and U.S. sports analytics lead at Accenture, was a part of this revolution. When Moneyball came out, Berger was working as an NFL and college football analyst for Fox Sports, before moving to ESPN and CBS. Berger had identified early on the potential for data and gaming to proactively inform sports predictions. Now, nearly 20 years post-Moneyball and amid the universal application of big data, Accenture is committed to promoting the expansion of the emerging and rapidly developing field of sports analytics.

"This partnership hinges on the power of Accenture's capabilities and proven track record of turning insights into revenue-generating businesses," Berger said. "That, coupled with UT's dedication to athletic excellence and McCombs' position as a leading business program, creates an unbeatable formula for pushing the envelope in sports analytics, sports science and sports business."

Globally, the sports analytics market size is expected to reach $4.6 billion by 2025, expanding at a rate of more than 30%, according to an April 2020 report in Forbes. And only a small portion of revenue-generating teams in the world have dedicated business intelligence groups, Burris said.

"Not only is this a chance for our 580 student athletes to enhance their craft through data analysis, but the minor in sports analytics will be incredibly attractive for students in a wide variety of majors, from kinesiology to communications," said Christine Plonsky, UT executive senior associate athletics director.

McCombs has become a hub for sports data analytics innovation since its hiring in 2019 of Kirk Goldsberry, the New York Times best-selling author of Sprawlball and a pioneer in the world of basketball analytics. Burris hired Goldsberry to develop coursework, teach sports analytics and oversee sports analytics research at McCombs. Goldsberry's groundbreaking insights have already landed him jobs as vice president of strategic research for the San Antonio Spurs, as the first-ever lead analyst for Team USA Basketball, and as a staff writer for ESPN. But his new job as executive director of the Business of Sports Institute, accompanied by UT vice president and athletics director Chris Del Conte in the role of strategic adviser, is where he says it all comes together.

"When it comes to sports, there's no university in the world where I'd rather be thinking about this," said Goldsberry. "UT is uniquely positioned with its size and passion to blossom into this hub for sports academic work. If there's such a thing as a perfect university setting for elite sports research, it's right here in Austin, Texas."


Read this article:

Business of Sports Institute at UT McCombs School Founded by Gift from Accenture - UT News - UT News | The University of Texas at Austin

Read More..

‘I Want The Folks in Our Society to Be Data Literate So That We Are Making Good Decisions Together for the Good of the World,’ Says Professor…

What makes a roller coaster thrilling or scary? How do you find a unique restaurant when you're planning to go out for dinner? Although seemingly unrelated, NC State College of Education Professor Hollylynne Lee, Ph.D., shared that both of these questions demonstrate the importance of understanding statistics and data science.

Lee is a professor of mathematics and statistics education, a senior faculty fellow with the college's Friday Institute for Educational Innovation and one of three 2022 finalists for Baylor University's highly prestigious Robert Foster Cherry Award for Great Teaching. During her Sept. 23 Cherry Award Lecture, entitled "Data Moves and Discourse: Design Principles for Strengthening Statistics Education," she discussed the need to strengthen statistics education and the ways she has used her research to create learning opportunities for both students and teachers.

Through audience participation, Lee highlighted that understanding of data and statistics has far-reaching implications beyond the classroom, with people sharing that they've used data in a variety of scenarios in their own lives, from buying a car and negotiating salaries to deciding where to live and monitoring COVID case numbers.

"We need data literate citizens. I want my neighbors and the folks in our society to be data literate so that we are making good decisions together for the good of the world," Lee said.

To produce those data literate citizens, Lee has devoted her career to helping create lessons that provide students with opportunities to access different mathematical and statistical ideas, keeping in mind the tools available to teachers, the questions that will guide their thinking and the ways that students might interact with the information and each other.

When engaging in purposeful design to create exceptional learning opportunities for students, Lee said that the two most critical aspects are data moves (the actions taken to produce, structure, represent, model, enhance or extend data) and discourse.

"These two things coming together are really what I care a lot about and use in instructional design related to statistics education," she said.

When engaging students in data analysis, Lee said it's important that the data they are looking at is real. Many textbooks have fake datasets that look realistic, but to truly understand data, the sets need to be large, multivariate and sometimes even messy, she said.

With engaging context rooted in reality, educators can then use appropriate tools to facilitate data moves and visualizations to help students uncover links between data representations.

Using data dashboards related to restaurants in the Raleigh and Durham areas and roller coasters at theme parks in several nearby states, Lee demonstrated how the inclusion of multiple variables into various data analyses can help students draw conclusions about data points.

For example, in a video recorded while Lee was working with students in a middle school classroom, she showed how adding data about the material a roller coaster was made of to a scatter plot that already showed speed and height helped students conclude that wooden roller coasters tend to have shorter drops and slower speeds than steel roller coasters.
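
As a rough illustration of that kind of data move (not the classroom dashboard Lee used), a short matplotlib sketch with made-up coaster data can color a height-versus-speed scatter plot by material:

import matplotlib.pyplot as plt

# Hypothetical coaster data: (drop height in feet, top speed in mph, material)
coasters = [
    (110, 52, "wood"), (95, 48, "wood"), (120, 55, "wood"),
    (205, 72, "steel"), (305, 93, "steel"), (160, 65, "steel"),
]
colors = {"wood": "tab:brown", "steel": "tab:blue"}

# Plot each material as its own series so the legend exposes the third variable
for material in colors:
    heights = [h for h, s, m in coasters if m == material]
    speeds = [s for h, s, m in coasters if m == material]
    plt.scatter(heights, speeds, color=colors[material], label=material)

plt.xlabel("Drop height (ft)")
plt.ylabel("Top speed (mph)")
plt.legend(title="Material")
plt.show()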

Although it may seem counterintuitive to the traditional idea of starting off simple when introducing new ideas, the video demonstrates that more complex data sets can actually help enhance student understanding.

"We do not live in a univariable or bivariable world. We live in a multivariable world, and our students are very adept at reasoning that way if we give them the opportunity," Lee said. "We know from a lot of research that special cases in data can help students make sense of the aggregate. Instead of explaining what I wanted [the students] to see, I made the graph more complex. I added a third variable so that it could contextualize something about those roller coasters for the students, and it worked."

To bring data lessons into classrooms for students, Lee noted it's important for pre-service and practicing teachers to have professional development opportunities surrounding statistics, as many did not have opportunities to learn about the subject in their own K-12 careers.

She discussed how she developed courses for the College of Education that ultimately attracted graduate students from across multiple colleges within NC State University, and how she ultimately applied her data course design principles to four online professional learning courses offered through the Friday Institute for Educational Innovation that have reached more than 6,000 educators in all 50 states and more than 100 countries.

Her Enhancing Statistics Teacher Education with E-Modules (ESTEEM) project, which began in 2016, created more than 40 hours of multimedia statistics modules for university instructors to use as needed in courses for pre-service secondary math teachers. Her current National Science Foundation-funded InSTEP project builds on seven dimensions of teaching statistics and data science (data and statistical practices, central statistical ideas, augmentation, tasks, assessment, data and technology) that have been proven to make for good learning environments.

Lee noted that, throughout her career, her most joyful moments have always been working with students and teachers, from watching teachers reflect on their practice and work together to improve their pedagogy to engaging with students as they dig into data and begin to make sense of it.

As she encouraged educators and future educators to think about how they will approach different problems in education and daily life in relation to learning and teaching statistics, she reminded them to have faith in their students' abilities and to be open to learning right alongside them.

"Teaching with data can be scary because you do have to say that you'll be a learner along with them. You're there thinking really hard in the moment about what that next question might be. That can be scary or thrilling," Lee said.

Read the rest here:

'I Want The Folks in Our Society to Be Data Literate So That We Are Making Good Decisions Together for the Good of the World,' Says Professor...

Read More..

Pandemic oversight board to preserve data analytics tools beyond its sunset date – Federal News Network

The Pandemic Response Accountability Committee got started last year borrowing on what worked more than a decade earlier, when the Recovery Accountability and Transparency Board oversaw more than $480 billion in stimulus spending following the 2008 recession.

But the PRAC, which will continue to operate until the end of September 2025, is learning its own lessons overseeing more than $5 trillion in COVID-19 spending.

Aside from the PRAC overseeing more than six times more stimulus spending than what Congress authorized to recover from the 2008 recession, speed and urgency also factored into how agencies administered COVID-19 programs.

In light of these circumstances, the PRAC, in a report issued last month, documented five main takeaways from how agencies disbursed pandemic spending:

Former PRAC Deputy Executive Director Linda Miller, now a principal with Grant Thornton, said the urgency of pandemic relief put that spending at a higher risk of fraud, waste and abuse.

"Recovery spending was to try to recover from a recession, and it was a lot of shovel-ready construction projects that had timeframes that were well-established. This was disaster aid; this was more similar to a post-disaster, like a hurricane. Money quickly went out the door, and disaster aid is inherently riskier, because you're in a situation where, because people are in dire circumstances, you're willing to lower the guardrails when it comes to controls, and people are more likely to exploit that bad situation to take advantage of unwitting recipients," Miller said.

PRAC Executive Director Robert Westbrooks told the Federal Drive with Tom Temin that the speed of payments also made it difficult for agencies to prioritize funding for underserved communities.

The Small Business Administration, for example, was supposed to collect demographic data and prioritize underserved communities as part of the Paycheck Protection Program, but its inspector general found the agency wasn't initially meeting those goals.

SBA, however, made underserved businesses a priority in subsequent rounds of PPP spending.

"The initial rules were first come, first served. Well, that certainly gives the advantage to folks that have established relationships with national lenders that were responsible for most of the PPP loans, and it disadvantages underserved communities," Westbrooks said.

The PRAC, however, is ensuring it does something that didn't happen under the Recovery Board: making sure its library of data sets and analytics tools still has a home beyond its sunset date.

Miller said the PRAC plans to turn its Pandemic Analytics Center of Excellence (PACE) over to the Council of the Inspectors General on Integrity and Efficiency, ensuring that the committee has a lasting impact after it disbands in 2025.

The Recovery Board didn't find a permanent home for its Recovery Operations Center (ROC), which resulted in the loss of analytical capabilities once the board disbanded in 2015.

"For many of us in the oversight community, we wanted Treasury or somebody to take over the ROC, because here was all this existing infrastructure, a lot of data was already there, but nobody really had the interest or the funding to take it over. And so we wanted to make sure, when we started the PRAC, that we were not going to have a similar situation," Miller said.

The Government Accountability Office found the Treasury Department had the authority to receive ROC assets after the Recovery Board's sunset date. But GAO said Treasury had no plans to transfer the ROC's hardware and software assets, citing cost, lack of investigative authority, and other reasons.

"While OIGs with the financial resources to do so may pursue replication of the ROC's tools, the ROC's termination may have more effect on the audit and investigative capabilities of some small and medium-sized OIGs that do not have the resources to develop independent data analytics or pay fees for a similar service, according to some OIG officials," GAO wrote.

PACE, however, is more than just ROC 2.0, and has analytics capabilities, algorithms and models developed for specific types of fraud, waste and abuse that can be leveraged by agency IGs.

These tools not only empower IGs, but also nonprofits and individuals who can tip off agency watchdogs about red flags. Former Recovery Board Chairman Earl Devaney said in an interview last year that empowering citizen watchdogs helped IGs oversee stimulus spending.

Miller said the PRAC has a similar mission of making agency oversight more evidence-based and more data-driven.

"Being able to tap into the power of the data science community writ large, whether that's in the private sector or academia or even a college student that's interested in the data, the PRAC absolutely encourages the use of those data sets, and to share anything that has been identified," Miller said.

The PRAC report highlights the importance of agencies using existing federal data sources to determine benefits eligibility, but the committee is also taking steps to improve the quality of its own data on COVID-19 spending recipients.

The American Recovery and Reinvestment Act Congress passed in 2009 required recipients to submit data that went directly to the Recovery Board, which conducted data analysis and also followed up with recipients that didnt submit adequate data.

"The result was a really impressive data set that the Recovery Board had, and I think many people thought, 'Well, that's what's going to happen now. The PRAC is going to be created and they're going to have a similar data set to what the Recovery Board had,'" Miller said.

Less than two weeks after Congress passed the CARES Act, however, OMB issued guidance that directed agencies to report all CARES Act spending through existing channels on USASpending.gov. Miller said the PRAC members disagreed with OMB's decision, which went against best practices learned from the Recovery Board.

"We believed that was not going to provide the level of transparency that we were required to provide through the CARES Act, and we've raised that with OMB on multiple occasions. They felt the recipient reporting burden was too significant to create a separate portal," she said.

The PRAC commissioned a report that found significant reporting gaps in the data available to the PRAC. Miller said the committee conducted its own independent analysis and found about 40,000 awards whose descriptions just said "CARES Act."

Miller said the PRAC's reliance on USASpending.gov requires the committee to comb state government websites and other sources of reliable pandemic spending data. She said this "patchwork quilt" process of pulling data from a variety of sources still continues at the PRAC.

It's a time-consuming process for an organization that only has about 25 people on its staff.

"What we're really trying to do is cobble together something that gets as much data as possible to the public on PandemicOversight.gov," Miller said.

Read this article:

Pandemic oversight board to preserve data analytics tools beyond its sunset date - Federal News Network

Read More..

Increase the Readability of Your Python Script With 1 Simple Tool – Built In

One of the biggest challenges beginning programmers have when learning a new language is figuring out how to make their code more readable. Since collaboration is a critical part of the modern working world, it's vital we ensure other people can easily understand and make use of our code. At the same time, beginning programmers are struggling to figure out how to make their code work, and figuring out how to make it user-friendly seems like an added hurdle you just don't have time for.

I've been there, I get it.

Fortunately, there are some pretty simple steps you can take to write clearer code. One of the main ways to make Python code more readable is by collapsing multiple lines of code into a single line. There are many ways to do this, so we'll dive into one particular method: list comprehensions. The best part about this process is that, since this is a standard method, other Python programmers reviewing your code will be able to quickly understand what you're doing.

List comprehensions are a Python tool that allows you to create a new list by filtering the elements in your data set, transforming the elements that pass the filter and saving the resulting list, all in a single line of code.

Clarify Your Code: 5 Ways to Write More Pythonic Code


But before we dive into that, let's take a second to think about what code we have to write to accomplish that without list comprehensions.

First we need to create an empty list to fill later.

Then we need to write a for loop to iterate through each value in our data set.

Then we need to write an if statement to filter the data based on a condition.

And finally, we need to write another statement to append the resulting data into the list.

Let's take a look at the code to see what this looks like. Imagine we have a list of 10 numbers and we want to identify and save each number greater than the mean value.
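
The snippet itself does not survive in this excerpt, but based on the description that follows, and assuming Numbers holds the integers 1 through 10 (which matches the stated result), it would look roughly like this:

# Create the data set and an empty list to fill
Numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Result = []

# Loop through the data, filter on the mean and append each value that passes
for Number in Numbers:
    if Number > sum(Numbers) / len(Numbers):
        Result.append(Number)

print(Result)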

That example code first creates a list called Numbers which contains our data set, then executes the four steps outlined above to create the resulting list. What do you think the result will be?

When you're ready to check your answer, here it is: [6, 7, 8, 9, 10].

This takes four lines of code. Four lines isn't an unmanageable amount of code, but if you can write it in a single line that others easily understand, why not?

More on Python Lists: How to Append Lists in Python

The general structure of a list comprehension is as follows:

[Function for Value in DataSet if Condition]

Written in plain English, this says, "Execute this Function on each Value in the DataSet that meets a certain Condition."

Function is how you want to modify each piece of data, which you may not want to do! Modifications aren't necessary, and you can use list comprehensions to store values without modifying them.

Value is an iterator we use to keep track of the particular data value handled on each pass through the for loop.

DataSet is the data set you're analyzing in the list comprehension.

Condition is the condition the data must meet to be included.

To map those terms to the code in our previous example, we have:

Function: Number. Since we're only storing the data without modification, we store the iterator without calling a function.

Value: Number. This is the iterator name we used in the previous example, so we use it again here. Note: For this example, the term used in Function and Value must be the same because our goal is to store Value.

DataSet: Numbers. This was the list we used as our data set in the previous example, and we're going to use it the same way here.

Condition: if Number > sum(Numbers)/len(Numbers). This if statement identifies numbers that are greater than the mean of the data set and instructs the list comprehension to pass those values.

Here's how it looks written as a single list comprehension:
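
The one-liner is not reproduced in this excerpt; given the mapping above, it would plausibly read:

Numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # same data set as before
Result = [Number for Number in Numbers if Number > sum(Numbers) / len(Numbers)]
print(Result)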

The result from executing this code is [6, 7, 8, 9, 10]. It's the same result, but written with a single line of code using a structure that other coders will easily understand.

We've focused on lists, but this method can be applied to sets and dictionaries, too. (FYI, if you're new to coding, you may see dictionary commonly abbreviated as dict.) We only need to make a few slight changes that correspond to the different syntax used for those data structures.

The only difference between sets and lists, syntactically, is that sets use curly brackets instead of the square brackets we use for lists. A set comprehension looks like this:
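
That snippet is also missing from this excerpt; a reconstruction using the same names the article refers to might be:

Numbers = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}  # curly brackets make this a set
Result = {Number for Number in Numbers if Number > sum(Numbers) / len(Numbers)}
print(Result)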

Notice how there are only two differences here. Numbers is created with curly brackets instead of square brackets, which makes it a set instead of a list. We also surround the comprehension creating Result with curly brackets instead of square brackets, which makes it a set comprehension instead of a list comprehension. That's it.

We get the same result in set form instead of list form: {6, 7, 8, 9, 10}.

Learn More With Peter Grant: Learn the Fundamentals of Control Flow in Python

There are two differences between list comprehensions and dictionary comprehensions, both of which are driven by the requirements of dictionaries.

First, dictionaries use curly brackets instead of square brackets, so the dictionary comprehension structure must use curly brackets. Since this is the same requirement as for set comprehensions, if you start by treating a dictionary like a set, you're halfway there.

The second difference is driven by the fact that dictionaries use key: value pairs instead of only values. As a result, you have to structure the code to use key: value pairs.

These two changes lead to a structure that looks like this:
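
The dictionary version is not shown in this excerpt; one way to write it that produces the described output, assuming the same Numbers list, is:

Numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# Each value that passes the filter becomes a key: value pair
Result = {Number: Number for Number in Numbers if Number > sum(Numbers) / len(Numbers)}
print(Result)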

This does yield a slightly different result, because now the output is a dictionary instead of a list or set. Dictionaries have both keys and values, so the output has both keys and values. This means that the output will be: {6: 6, 7: 7, 8: 8, 9: 9, 10: 10}.

You may recall I said you can apply functions to these values as you process them. We haven't done that yet, but it's worth taking a moment to consider it now.

So how do we go about applying functions? We simply add their description to the Function part of the code.

For example, if we want to calculate the square of values instead of simply returning the value, we use the following code:
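
A minimal sketch of that code, assuming the same Numbers list, squares each value that passes the filter:

Numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Result = [Number ** 2 for Number in Numbers if Number > sum(Numbers) / len(Numbers)]
print(Result)  # squares of the values above the mean: [36, 49, 64, 81, 100]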

You could apply any number of other functions, making list comprehensions a flexible and powerful way to simplify your code.

Keep in mind that the purpose of list comprehensions is to make your code easier to read. If your list comprehension makes the code harder to read, then it defeats the purpose. List comprehensions can become difficult to read if the function or the condition is too long. So, when you're writing list comprehensions, keep that in mind and avoid them if you think your code will be more confusing with them than without them.

People learning a new programming language have enough of a challenge figuring out how to make their code work correctly, and often don't yet have the tools to make their code clear and easy to read. Nevertheless, effective collaboration is vital to the modern workplace. It's important we have the necessary tools to make our code readable to those without a coding background. List comprehensions are a standard Python tool you can use to make your code simpler to read and easier for your colleagues to understand.

This article was originally published on Python in Plain English.

Continued here:

Increase the Readability of Your Python Script With 1 Simple Tool - Built In

Read More..