
Aspartame is a possible carcinogen: the science behind the decision – Nature.com

Aspartame is used to sweeten thousands of food and drink products. Credit: BSIP SA/Alamy

The cancer-research arm of the World Health Organization (WHO) has classified the low-calorie sweetener aspartame as possibly carcinogenic.

The International Agency for Research on Cancer (IARC) in Lyon, France, said its decision, announced on 14 July, was based on limited evidence for liver cancer in studies on people and rodents.

However, the Joint FAO/WHO Expert Committee on Food Additives (JECFA) said that recommended daily limits for consumption of the sweetener, found in thousands of food and drink products, would not change.

"There was no convincing evidence from experimental or human data that aspartame has adverse effects after ingestion, within the limits established by the previous committee," said Francesco Branca, director of the WHO's Department of Nutrition and Food Safety, at a press conference on 12 July in Geneva, Switzerland.

The new classification "shouldn't really be taken as a direct statement that indicates that there is a known cancer hazard from consuming aspartame," said Mary Schubauer-Berigan, acting head of the IARC Monographs programme, at the press conference. "This is really more of a call to the research community to try to better clarify and understand the carcinogenic hazard that may or may not be posed by aspartame consumption."

Other substances classed as possibly carcinogenic include extracts of aloe vera, traditional Asian pickled vegetables, some vehicle fuels and some chemicals used in dry cleaning, carpentry and printing. The IARC has also classified red meat as probably carcinogenic and processed meat as carcinogenic.

Aspartame is 200 times sweeter than sugar and is used in more than 6,000 products worldwide, including diet drinks, chewing gum, toothpaste and chewable vitamins. The US Food and Drug Administration (FDA) approved it as a sweetener in 1974 and, in 1981, the JECFA established an acceptable daily intake (ADI) of 40 milligrams per kilogram of body weight. For a typical adult, this translates to about 2,800 milligrams per day, equivalent to 9-14 cans of diet soft drink.
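The ADI arithmetic can be checked in a few lines. The per-can aspartame content used below (roughly 200-300 mg) is an assumed range for illustration, not a figure from the article; actual products vary.

```python
# Back-of-the-envelope check of the acceptable daily intake (ADI) figures.
ADI_MG_PER_KG = 40        # JECFA acceptable daily intake for aspartame
BODY_WEIGHT_KG = 70       # "typical adult" assumption

daily_limit_mg = ADI_MG_PER_KG * BODY_WEIGHT_KG
print(f"Daily limit: {daily_limit_mg} mg")            # 2800 mg

# Assumed aspartame content per can of diet soft drink (mg).
for mg_per_can in (300, 200):
    print(f"{mg_per_can} mg/can -> {daily_limit_mg / mg_per_can:.0f} cans/day")
```

With those assumptions the limit works out to roughly 9 to 14 cans per day, matching the figure quoted above.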

The artificial sweetener has been the subject of several controversies over the past four decades that have linked it to increased cancer risk and other health issues. But re-evaluations by the FDA and the European Food Safety Authority (EFSA) have found insufficient evidence to reduce the ADI.

In 2019, an advisory group to the IARC recommended a high-priority assessment of a range of substances, including aspartame, on the basis of emerging scientific evidence. The IARC's evidence for a link between aspartame and liver cancer comes from three studies that examined the consumption of artificially sweetened beverages.

One of these, published online in 2014, followed 477,206 participants in 10 European countries for more than 11 years and showed that the consumption of sweetened soft drinks, including those containing aspartame, was associated with increased risk of a type of liver cancer called hepatocellular carcinoma1. A 2022 US-based study showed that consumption of artificially sweetened drinks was associated with liver cancer in people with diabetes2. The third study, involving 934,777 people in the US from 1982 to 2016, found a higher risk of pancreatic cancer in men and women consuming artificially sweetened beverages.

These studies used the drinking of artificially sweetened beverages as a proxy for aspartame exposure. Such proxies are "quite reliable, but do not always provide a precise measure of intake," says Mathilde Touvier, an epidemiologist at the French National Institute of Health and Medical Research in Paris.

Touvier co-authored another study included in the IARC's assessment, which considered aspartame intake from different food sources, including soft drinks, dairy products and tabletop sweeteners. The study found that among 102,865 adults in France, people who consumed higher amounts of aspartame (but less than the recommended ADI) had an increased risk of breast cancer and obesity-related cancers3.

The study shows "a statistically significant increased risk, robust across many sensitivity analyses," says Touvier. "But it hasn't had enough statistical power to investigate liver cancer for the moment."

The JECFA also evaluated studies associating aspartame with liver, breast and blood cancers but said that the findings were not consistent. The studies had design limitations, couldn't rule out confounding factors, or relied on self-reporting of daily dietary aspartame intake.

Dietary records are not always the most reliable. "We aren't just ingesting aspartame as a single agent. It's part of a combination of chemicals and other things," says William Dahut, chief scientific officer of the American Cancer Society, who is based in Bethesda, Maryland.

In the body, the sweetener breaks down into three metabolites: phenylalanine, aspartic acid and methanol. "These three molecules are also found from the ingestion of other food or drink products," says Branca. "This makes it impossible to detect aspartame in blood testing. That's a limitation of our capacity to understand its effects."

Methanol is potentially carcinogenic because it is metabolised into formic acid, which can damage DNA. "If you have enough methanol, it damages your liver and there's a risk of liver cancer," says Paul Pharoah, a cancer epidemiologist at Cedars-Sinai Medical Center in Los Angeles. But the amount of methanol generated by aspartame breaking down is trivial, he adds.

More studies are needed to explore aspartame's impact on metabolic processes, as well as its links to other diseases, the IARC says. "This research will also bring new pieces of evidence to the global picture," adds Touvier.


Privacy and Legal Concerns Surrounding the UK’s Online Safety Bill – Lexology

The UK's Online Safety Bill

A heated debate surrounds a crucial aspect of the UK's Online Safety Bill (The Bill): whether the pursuit of greater protections against child sexual abuse material (CSAM) justifies compromising individual privacy in relation to private messages. While the aim to combat CSAM is undoubtedly important, critics argue that The Bill's provision to scan end-to-end encrypted messages undermines the privacy rights of users.

What is End-to-End Encryption?

End-to-end encryption is a security method that offers users a greater level of privacy and security while exchanging private messages. This encryption technique ensures that only authorized recipients can access and decode the messages shared, protecting their content from unauthorized access by third parties, including service providers (like Google and WhatsApp) and governmental entities. By encrypting data on the sender's device and decrypting it on the receiver's device, end-to-end encryption prevents intercepted messages from being read and eliminates the vulnerabilities associated with storing confidential information in plain text on servers. This technology allows users to communicate confidently, knowing that their conversations, personal details, and digital interactions remain shielded from prying eyes. With the implementation of end-to-end encryption, messaging applications such as Signal and WhatsApp have upheld the fundamental right to privacy of users within the digital landscape.
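The encrypt-on-the-sender's-device, decrypt-on-the-receiver's-device flow can be illustrated with a deliberately simplified sketch. The XOR one-time pad below is a toy stand-in for real protocols such as the Signal protocol, and the shared key is assumed to have been exchanged between the two endpoints out of band; real end-to-end encryption uses authenticated public-key cryptography, not this.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR each byte with a same-length key (a one-time pad)."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at noon"
# Held only by the two endpoints, never by the relaying server.
key = secrets.token_bytes(len(message))

ciphertext = xor_bytes(message, key)    # encrypted on the sender's device
# The service provider relays `ciphertext` but cannot read it without `key`.
plaintext = xor_bytes(ciphertext, key)  # decrypted on the receiver's device
assert plaintext == message
```

The point of the sketch is structural: the server in the middle only ever handles `ciphertext`, so the plaintext exists solely at the two endpoints.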

Why is The Online Safety Bill So Contentious?

Proponents of stricter measures argue that the prevalence of CSAM necessitates proactive action by tech companies and regulatory authorities. They contend that scanning encrypted messages can help identify and remove illegal content, potentially saving victims from further harm. However, opponents in tech raise concerns about the potential erosion of privacy and the broader implications The Bill will have for digital rights. "Proponents say they appreciate the importance of encryption and privacy while also claiming that it's possible to surveil everyone's messages without undermining end-to-end encryption. The truth is that this is not possible," reads an open letter from opponents of The Bill.

Apple, Signal and WhatsApp's Argument

In a recent statement to the media, Apple joined the chorus of voices against The Bill, saying that end-to-end encryption "is a critical capability that protects the privacy of journalists, human rights activists, and diplomats". Other prominent end-to-end encrypted messaging apps, including Signal and WhatsApp, have also taken a firm stance against the Online Safety Bill. WhatsApp's head, Will Cathcart, stated that the platform would refuse to comply with any legal requirement to undermine its encryption. Similarly, Signal president Meredith Whittaker warned that the secure messaging platform would rather quit the UK than compromise the security and privacy of its users.

The Online Safety Bill also carries legal implications for non-compliant companies. Failure to adhere to The Bill's requirements could result in substantial fines, and senior executives may face imprisonment under the expanded criminal liability provisions. The inclusion of clauses that allow Ofcom to compel communications providers to take action to prevent harm to users has also drawn criticism from tech companies.

Conclusion

The encryption-busting Online Safety Bill has provoked a fierce backlash within the tech industry, as concerns grow over the potential loss of secure messaging apps from the UK. Tech giants like Apple have expressed their reservations, emphasizing the critical role of end-to-end encryption in protecting user privacy. With The Bill's passage into law anticipated this summer, the debate surrounding privacy, security, and the balance between law enforcement and individual rights continues to intensify.

Taylor Hampton Solicitors is an award-winning London based law firm recognised as a leader in defamation, privacy, phone hacking and internet litigation. Whilst distinguished for our work in media and internet law, our practice also focuses on UK immigration and visa services and Australian migration.

Please visit our website at https://taylorhampton.co.uk and contact us at [emailprotected] or 00444275970 for further information on our professional services. We offer a preliminary consultation without obligation.


Faculty Openings, Teaching-stream Positions (All Ranks) job with … – Times Higher Education

The School of Data Science (SDS) at The Chinese University of Hong Kong, Shenzhen (CUHK-SZ) is now inviting qualified candidates to fill multiple teaching-stream faculty positions. The primary duties are teaching courses offered by the School of Data Science in its multiple programmes.

Applicants should hold or expect to obtain a Ph.D. degree in one or more of the following areas: Computer Science, Operations Research, Data Science, Machine Learning and Artificial Intelligence, Statistics, Management Science, and other closely related areas. Junior applicants must demonstrate clear and strong potential for teaching excellence. Senior applicants are expected to demonstrate an established record of teaching accomplishments, relevant academic activity, and leadership.

Applications from overseas with international experience are particularly welcomed. We also encourage applications from under-represented or disadvantaged groups in the scientific community.

The School offers a very competitive package, including a significant salary and a life-long career development path from Assistant Professor (Teaching) to Full Professor (Teaching). Critical contributions of teaching-stream faculty are valued in the School; in particular, some may join the SDS leadership team.

Although not mandatory, teaching-stream faculty who so wish are welcome to join ongoing research projects of the School, particularly in collaboration with industry partners and other research units of the university.

Interested individuals should apply online at http://academicrecruit.cuhk.edu.cn/sds

The application packages should include a cover letter, a curriculum vitae, a teaching statement, and prior teaching evaluations (if any). In addition, applicants should provide names, titles, and emails of at least three references in the system. If you have any questions, please send an email to talent4sds@cuhk.edu.cn.

Applications/Nominations will be considered until the posts are filled.

About the School of Data Science at The Chinese University of Hong Kong, Shenzhen:

The School of Data Science (SDS) of The Chinese University of Hong Kong, Shenzhen was established in July 2020. Located in Shenzhen, the innovation hub of China, the SDS focuses on first-class teaching and academic research in Data Science. It has established a systematic education system in data science, covering theoretical foundations such as operations research, statistics, and computer science, and application fields such as machine learning, operations management, and decision analytics, providing students with comprehensive and state-of-the-art training. With the aim "to nurture high-end talent with global perspective, Chinese tradition and social responsibility", the school is organically combining industry, education and research, determined to become the world's leading data science innovation and research base, as well as to cultivate top innovative talents with a global perspective.

The SDS is established on a solid foundation with a strong faculty team. Currently it has more than 60 faculty members, many of whom have experience working at top-tier universities around the world and have significant international impact in related fields of academia and industry.

The establishment of the SDS reflects the increasing investment of The Chinese University of Hong Kong, Shenzhen in the field of data science, and its determination to stand at the forefront of the era and to cultivate the talent needed for the development of society.


Safeguarding Your Privacy With Encrypted Apps to Thwart … – Innovation & Tech Today

In an Orwellian era marked by ever-increasing digital surveillance, many law-abiding citizens are increasingly concerned about their privacy. The revelations by whistleblowers Edward Snowden and William Binney shed light on pervasive government surveillance, and corporate surveillance by the tech giants may be even more widespread.

As a response to these abuses, the development and adoption of encrypted apps has gained momentum, providing individuals with a layer of privacy and security. In this article, we'll cover just a few of your best options for protecting your online privacy.

Signal, endorsed by privacy advocates and experts worldwide, has emerged as the gold standard for secure messaging. This open-source app encrypts your conversations end-to-end, ensuring that only the intended recipient can decipher the messages. Signal also boasts features like self-destructing messages, verification codes, and secure voice and video calls. Its simplicity, robust encryption protocols, and wide adoption make it an ideal choice for safeguarding your private conversations.

When it comes to securing your email communications, ProtonMail stands out as a reliable option. Offering end-to-end encryption, ProtonMail ensures that your messages remain inaccessible to unauthorized entities. The app enables you to send encrypted messages to non-ProtonMail users, further expanding its reach. ProtonMail's emphasis on privacy, coupled with user-friendly features, makes it a popular choice for those seeking to shield their email communications from prying eyes.

For users concerned about their online activities being tracked and monitored, the Tor browser provides a valuable solution. By routing your internet traffic through a network of encrypted relays, Tor conceals your identity and location, effectively shielding you from prying eyes. Whether you're accessing sensitive information, evading censorship, or simply desiring online anonymity, the Tor browser offers a powerful tool to protect your privacy during web browsing.
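Tor's encrypted relaying is often pictured as an onion: the client wraps its request in one encryption layer per relay, and each relay peels exactly one layer. The sketch below shows only that wrap-and-peel structure, using a toy XOR operation as a stand-in for the real per-hop ciphers Tor actually uses.

```python
import secrets

def xor_layer(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real per-hop cipher such as AES.
    return bytes(a ^ b for a, b in zip(data, key))

request = b"GET /page"
# One key per relay in the circuit (three relays is Tor's usual circuit length).
relay_keys = [secrets.token_bytes(len(request)) for _ in range(3)]

# The client adds one layer per relay; the innermost layer is for the exit.
packet = request
for key in reversed(relay_keys):
    packet = xor_layer(packet, key)

# Each relay peels exactly one layer. Only the exit relay recovers the
# request, and no single relay sees both the sender and the destination.
for key in relay_keys:
    packet = xor_layer(packet, key)
assert packet == request
```

This is why interception at any single relay reveals neither the content nor both endpoints of the connection.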

A Virtual Private Network (VPN) is a crucial tool for protecting your online activities from surveillance. NordVPN, a widely recognized and trusted VPN service, encrypts your internet traffic and routes it through remote servers, shielding your data from prying eyes. With an extensive network of servers worldwide, NordVPN offers robust security and privacy features, allowing you to browse the web, access geo-restricted content, and engage online without compromising your privacy.

ProtonVPN, developed by the same team behind ProtonMail, combines security, privacy, and user-friendliness. With a focus on strong encryption, a strict no-logs policy, and support for advanced security protocols, ProtonVPN ensures that your internet traffic remains shielded from surveillance. Its intuitive interface and various subscription plans cater to both casual users and privacy enthusiasts, making it a top choice for those seeking a VPN service with ease of use.

In an age where violations of privacy by governments and corporations have become pervasive, encrypted apps provide a formidable line of defense. By harnessing the power of these tools, you can regain control over your digital persona. If you still value your privacy in the digital age, stay vigilant, adopt privacy-focused technologies, and reclaim your rights.


Where Does AI Happen? – KDnuggets

Partnership Post

By Connor Lee, Incoming NYU Computer Science Student

With the leap in AI progress making shockwaves throughout mainstream media since November 2022, many speculate their jobs will be taken over by their AI counterparts. One profession, however, cannot possibly be replaced: the researchers advancing deep neural networks and other machine learning models, the humans behind the AI. Although research is traditionally done within university walls, AI is by no means a traditional research field. A sizable portion of AI research is done in industrial labs. But which sector should aspiring researchers flock toward? Academia or industry?

"Academia is more inclined to basic fundamental research while the industry is inclined to user-oriented research driven by the large data access," says Nitesh Chawla, a Professor of Computer Science and Engineering at the University of Notre Dame. Prof. Chawla points to the pursuit of knowledge as a separating factor between industrial and academic AI research. Within industry, research is tied to a product, advancing towards a better society, while within academia the pursuit of pure discovery drives research breakthroughs. The seemingly endless academic freedom does not come without its drawbacks: academia "does not have the data nor the computing access available," according to Prof. Chawla.

For aspiring young researchers, the choice seems simple: the private sector has everything they could want. Vast, autonomous, commercial organizations striving toward innovation while supported by readily available data, computing power, and funding. This has led to a perception that industry is stealing talent away from academia. Academics, naturally, complain. A study published in 2021 by a team from Aalborg University pointed out that increasing participation of the private sector in AI research has been accompanied by a growing flow of researchers from academia into industry, and especially into technology companies such as Google, Microsoft, and Facebook.

As expected, industrial researchers disagree. "When I hire for my team, I want top talent, and as such I'm not poaching academic talent, but rather I am trying to help them get industry awards, funding from industry, and have their students as interns," explains Dr. Luna Dong, a Principal Scientist at Meta who is the head scientist working on Meta's smart glasses. She sees a glaring difference between industry and academia, which could be credited to the fundamental way research is conducted. According to Dr. Dong, AI research within industry is conducted by knowing what the end product should look like and reverse engineering a path toward it. In contrast, academics, having a promising idea, continuously construct various paths, not knowing where those paths would lead.

Yet, despite these contrasts, Dr. Dong believes the industry helps academia and vice versa: "lots of industry breakthroughs are inspired by applying the research from academia on real use-cases." Likewise, Computer Science Professor Ankur Teredesai, from the University of Washington, Tacoma, describes the relationship between industry and academia as supporting each other: "symbiotic is the word that comes to mind." As he views it, research practices have evolved, with academics shifting their agenda to aid industry products; a good example of that shift is the joint positions within major corporations that some prominent professors hold.

Regardless of their affiliations, the data science community converges a few times a year at conferences. Prof. Chawla describes them as "a wonderful melting pot." Some conferences are traditionally more academic, some purely industrial, but some are a perfect blend of both. Prof. Chawla points to KDD, or the Special Interest Group on Knowledge Discovery and Data Mining, a conference known for such a connection. KDD maintains two parallel peer-reviewed tracks: the research track and the applied data science (ADS) track. As put by Dr. Dong, who was the ADS Program Co-Chair at KDD-2022, KDD is helpful by "providing a forum for researchers and practitioners to come together to listen to the talks and discuss the techniques while inspiring each other. KDD is a place where we break the barriers of communication and collaboration, where we demonstrate how data science and machine learning advances with industry consumption."

This is the mindset that drove KDD from its early days. "One of the things we wanted to do from the very beginning was to create a conference where applications were well represented," says Prof. Usama Fayyad, Executive Director of the Institute for Experiential AI at Northeastern University and a former Chief Data Officer of Yahoo, who together with Dr. Gregory Piatetsky-Shapiro co-founded the KDD conference in 1995. Prof. Fayyad believes that if AI conferences were only focused on academics, it would be a big miss, given the collective desire to prove research on real problems and the motivation to drive new research based on emerging data sets.

However, opening up KDD to industry also had its challenges. With the research track rightfully dominated by academia-originated work, the ADS track was meant to be primarily dedicated to applied studies coming from industrial research labs. In reality, more than half of ADS publications have their origins within academia or are a result of strong academic-industrial collaboration. A decade ago, Prof. Fayyad realized that many interesting AI applications were developed by teams that were simply too busy to write papers. He led KDD into its current phase, in which KDD organizers scout and curate distinguished invited talks given by top industrial practitioners. The ADS invited talks have quickly become the highlight of the conference.

The KDD Cup competition, held annually in conjunction with the KDD conference, is yet another way to connect the academic and industrial worlds. "KDD Cup is a way to attract both industry and academia participants where companies bring some of the challenges that they are comfortable sharing, while academics get to work on data they would never have access to," describes Prof. Teredesai, who is also the CEO of the health tech company CueZen. Each year, a novel task is introduced and a new dataset is released. Hundreds of teams sprint towards the most effective solution, competing for prizes and fame. Prof. Fayyad agrees: "It's been a very healthy thing for the field because we see participation from academia, students diving in, or even companies teaming together."

Circling back to the choice between industry and academia, it will soon become irrelevant. With academic courses taught by practitioners, professors leading industrial labs, global cloud computing resources becoming dominant, and more data becoming available, the academic-industrial boundaries are quickly getting blurred in the AI domain. No need to stick to either of the two sectors; just choose the project you are most excited about!

Connor Lee is a 2023 graduate from Saratoga High School in the Bay Area. He will be joining the Computer Science program at NYU in the fall. By all means, Connor will be one of the youngest KDD attendees ever!


What Does the Patchless Cisco Vulnerability Mean for IT Teams … – InformationWeek

On July 5, Cisco released a security advisory warning users of a vulnerability in the Cisco ACI Multi-Site CloudSec encryption feature of Cisco Nexus 9000 Series Fabric Switches in ACI mode.

The networking and cybersecurity solutions company has no plans to release software updates to address the vulnerability, and there are no workarounds. IT teams are now faced with responding to a patchless vulnerability.

The vulnerability (CVE-2023-20185) impacts Cisco Nexus 9000 Series Fabric Switches in Application Center Infrastructure (ACI) mode that run releases 14.0 and later, specifically if the data switching gear is a part of Multi-Site topology and uses the CloudSec encryption feature, according to the security advisory.

The high-severity vulnerability could allow sensitive user and company data to be read, modified, or exploited by bad actors intercepting encrypted traffic and/or using cryptanalytic techniques to break the encryption, George Gerchow, CSO and SVP of IT at SaaS analytics platform Sumo Logic, tells InformationWeek. He is also on the faculty of the cybersecurity research firm Institute for Applied Network Security (IANS).

Successful exploitation of this vulnerability could have wide-ranging consequences. In addition to manipulation of traffic between ACI sites, bad actors could leverage the vulnerability to cause broader security breaches. "If attackers gain unauthorized access to the network through this vulnerability, it could potentially open pathways for further exploitation or lateral movement within the network," explains Callie Guenther, senior manager of cyber threat research at managed detection and cybersecurity company Critical Start.

Thus far, Cisco's Product Security Incident Response Team (PSIRT) has not found any indication that the vulnerability has been exploited, according to the security advisory and an emailed statement.

The company recommends that customers using its ACI Multi-Site CloudSec encryption feature on certain Nexus Series Switches and Line Cards immediately disable the feature. The security advisory includes directions on how to determine the status of the CloudSec feature. The company recommends users reach out to their Cisco support organization to talk about alternatives.

The lack of a patch or workaround for the vulnerability is not typical, and it likely indicates a complex issue, according to Guenther. "It signifies that the vulnerability may be deeply rooted in the design or implementation of the affected feature," she says.

With no workarounds or forthcoming patch, what can IT teams do in response to this vulnerability?

Before taking a specific action, IT teams need to consider whether this vulnerability impacts their organization. "I have seen companies go into a panic, only to find out that a particular issue didn't really affect them," says Alan Brill, senior managing director in the Kroll Cyber Risk Practice and fellow of the Kroll Institute, a risk and financial advisory solutions company.

When determining potential impact, it is important for IT teams to take a broad view. The vulnerability may not directly impact an organization, but what about its supply chain? Third-party risk is an important consideration.

If an IT team determines that the vulnerability does impact their organization, what is the risk level? How likely is threat actor exploitation?

In some cases, the risk may be small enough that it does not require a response. "Document your decision and thinking to demonstrate that an analysis was done and to show that a decision not to respond to the particular problem was a reasonable one," Brill recommends.

In other cases, Cisco customers will need to act. This may mean disabling the function and considering alternatives, but these responses are not without complications.

The feature in question could be critical to an organizations network infrastructure function. Disabling it could mean operational disruptions and limited network functionality.

Once the feature is disabled, IT teams may need to find alternate configurations to address the loss of functionality. "This might involve reconfiguring network paths, adjusting security policies, or implementing alternate encryption mechanisms," says Guenther. "Such reconfigurations can be complex and time-consuming, especially in large-scale environments with intricate network architectures."

Disabling the feature and introducing an alternative configuration will require impact assessment and testing. How will disabling the feature impact network performance and security? Will an alternative introduce new potential risks?

"Disabling the CloudSec encryption provides potential access in clear text to organizational data, a risk that malicious actors are now aware of and may seek to exploit," says Gerchow.

While a patchless vulnerability may stand out, it is likely that it will happen again. "Given the complexity of the software, and the embedded code can be the source of problems for a lot of packages, I think it's really a matter of when it happens again, not if it will ever happen again," says Brill.

Gerchow argues that IT leaders should push for a move to SaaS and public cloud solutions. "The lack of a patch or workaround from Cisco leaves customers in a vulnerable position, whereas SaaS and public cloud providers bear the responsibility for maintaining the security of the infrastructure," he says.

IT teams will inevitably need to address other software vulnerabilities, whether they can be patched or not, in the future. Strengthening an organizations security posture, understanding risk, communicating with vendors, and having a strong incident response plan in place can help them prepare for the next one.

"Having a plan, having management's backing, and understanding and carrying through on the plan is the best solution when faced with this kind of problem," says Brill.



Synthetic Data Platforms: Unlocking the Power of Generative AI for … – KDnuggets

Creating a machine learning or deep learning model is easy nowadays. There are different tools and platforms available that not only automate the entire process of creating a model but even help you select the best model for a particular dataset.

One of the essential things you need to solve a problem by creating a model is a dataset containing all the attributes that describe the problem. Suppose we are looking at a dataset describing the diabetes history of patients. Specific columns, such as age, gender, and glucose level, are significant attributes that play an essential role in predicting whether a person has diabetes. To build a diabetes prediction model, we can find multiple publicly available datasets. However, we may face difficulty solving problems where data is not readily available or is highly imbalanced.

Synthetic data generated by deep learning algorithms is often used in place of original data when data access is limited by privacy compliance or when the original data needs to be augmented to fit specific purposes. Synthetic data mimics real data by recreating its statistical properties. Once trained on real data, a synthetic data generator can create any amount of data that closely resembles the patterns, distributions, and dependencies of the real data. This not only helps generate similar data but also allows introducing certain constraints, such as new distributions. Let's explore some use cases where synthetic data can play an important role.

Generative AI models are crucial in synthetic data production since they are trained directly on the original dataset and can replicate its traits and statistical attributes. Generative models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) learn the underlying data distribution and produce realistic, representative synthetic instances.
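The core idea of "learning statistical properties, then sampling" can be shown in miniature. The sketch below is deliberately simplistic: it fits an independent Gaussian to a single toy column, whereas real generators like GANs and VAEs model joint distributions and cross-column correlations. The data values are invented for illustration.

```python
import random
import statistics

# Toy "real" column (e.g. glucose levels from a patient dataset).
real = [85, 90, 110, 140, 95, 100, 130, 120, 105, 115]

# "Train" the generator: learn the column's statistical properties.
mu, sigma = statistics.mean(real), statistics.stdev(real)

# Generate any amount of synthetic data from the fitted distribution.
random.seed(0)
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

# The synthetic column resembles the real one statistically,
# without copying any individual record.
print(round(statistics.mean(synthetic), 1), round(statistics.stdev(synthetic), 1))
```

A real generator does the same thing at scale: fit once on the original data, then sample as many rows as needed.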

There are numerous open-source and closed-source synthetic data generators out there, some better than others. When evaluating the performance of synthetic data generators, it's important to look at two aspects: accuracy and privacy. Accuracy needs to be high without the synthetic data overfitting the original data, and the extreme values present in the original data need to be handled in a way that doesn't endanger the privacy of data subjects. Some synthetic data generators offer automated privacy and accuracy checks - it's a good idea to start with these first. MOSTLY AI's synthetic data generator offers this service for free - anyone can set up an account with just an email address.
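A toy version of those two checks can be written in a few lines. Production tools use much richer metrics (distribution distances per column, correlation preservation, nearest-neighbor distance ratios); this sketch, with randomly generated stand-in data, only illustrates the principle that synthetic data should match in aggregate while containing no copied records.

```python
import random
import statistics

random.seed(1)
real = [random.gauss(50, 10) for _ in range(500)]
synthetic = [random.gauss(50, 10) for _ in range(500)]  # stand-in for a generator's output

# Accuracy check: summary statistics of the two sets should be close...
mean_gap = abs(statistics.mean(real) - statistics.mean(synthetic))

# Privacy check: ...but no synthetic record should be an exact copy of a real one.
exact_copies = len(set(real) & set(synthetic))

print(round(mean_gap, 2), exact_copies)
```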

Synthetic data is not personal data by definition. As such, it is exempt from GDPR and similar privacy laws, allowing data scientists to freely explore the synthetic versions of datasets. Synthetic data is also one of the best tools to anonymize behavioral data without destroying patterns and correlations. These two qualities make it especially useful in all situations when personal data is used - from simple analytics to training sophisticated machine learning models.

However, privacy is not the only use case. Synthetic data generation can also be used in the following use cases:

In order to generate synthetic data, we may use different tools that are available in the market. Let's explore some of these tools and understand how they work.

For a comprehensive list of synthetic data tools and companies, here is a curated list with synthetic data types.

Now that we have discussed the pros and cons of the tools and libraries described above for synthetic data generation, let's look at how to use MOSTLY AI, one of the best and easiest-to-use tools available in the market.

MOSTLY AI is a synthetic data creation platform that assists enterprises in producing high-quality, privacy-protected synthetic data for a number of use cases such as machine learning, advanced analytics, software testing, and data sharing. It generates synthetic data using a proprietary AI-powered algorithm that learns the statistical aspects of the original data, such as correlations, distributions, and properties. This enables MOSTLY AI to produce synthetic data that is statistically representative of the actual data while simultaneously safeguarding data subjects' privacy.

Its synthetic data is not only private, but it is also simple to use and can be made in minutes. The platform has an easy-to-use interface powered by generative AI that enables organizations to input existing data, choose the appropriate output format, and produce synthetic data in a matter of seconds. Its synthetic data is a beneficial tool for organizations that need to preserve the privacy of their data while still using it for a number of objectives. The technology is simple to use and quickly creates high-quality, statistically representative synthetic data.

Synthetic data from MOSTLY AI is offered in a number of formats, including CSV, JSON, and XML. It can be utilized with several software programs, including SAS, R, and Python. Additionally, MOSTLY AI provides a number of tools and services, such as a data generator, a data explorer, and a data sharing platform, to assist organizations in using synthetic data.

Lets explore how to use the MOSTLY AI platform. We can start by visiting the link below and creating an account.

MOSTLY AI: The Synthetic Data Generation and Knowledge Hub - MOSTLY AI

Once we have created the account we can see the home page where we can choose from different options related to data generation.

As you can see in the image above, on the home page we can upload the original dataset for which we want to generate synthetic data, or, just to try it out, we can use the sample data. We can upload data as per our requirements.

As you can see in the image above, once we upload the data, we can choose which columns to generate and adjust settings related to the data, training, and output.

Once we set all these properties as per our requirements, we click on the launch job button, and the data is generated in real time. On MOSTLY AI, we can generate 100K rows of data every day for free.

This is how you can use MOSTLY AI to generate synthetic data in real time by setting the properties of the data as required. There can be multiple use cases depending on the problem you are trying to solve. Go ahead and try this with your datasets, and let us know in the response section how useful you think this platform is.

Himanshu Sharma is a Post Graduate in Applied Data Science from the Institute of Product Leadership. A self-motivated professional with experience in the Python programming language and data analysis, he is looking to make his mark in the fields of Data Science and Product Management. He is an active blogger with expertise in technical content writing in Data Science, awarded as a Top Writer in the field of AI by Medium.

See the original post here:

Synthetic Data Platforms: Unlocking the Power of Generative AI for ... - KDnuggets


Trai's proposal to regulate OTT concerning, may threaten privacy, end encryption: Experts – The Economic Times

Over-the-top (OTT) communication services like WhatsApp, Signal, or Telegram could be "over regulated," with the threat of encryption being compromised, said technology policy experts after the Telecom Regulatory Authority of India (Trai) proposed to regulate such services in its consultation paper released last week.

OTT services are currently regulated under the IT Act, which will soon be replaced by the proposed Digital India Bill.

Experts said that currently the government is required to give a notice under Section 69(a) of the IT Act to track calls, and if OTT services are regulated by Trai, it will make it much easier for the government to intercept calls.

The firms may also have to do away with end-to-end encryption, which would be a risk to user privacy and threaten the operations of firms like WhatsApp in India, experts said.

"OTTs are currently regulated under the IT Act and adding another regulator in the mix is likely to complicate issues. India will probably have to consider a collaborative digital regulation framework like the one that the UK has set up," said Rohit Kumar, founding partner of public policy research firm The Quantum Hub.

Last Friday, Trai had issued a consultation paper on the regulatory mechanism for OTT communication services as well as their selective banning on national security grounds.

Nikhil Narendran, partner at the TMT practice of law firm Trilegal, said, "Once a licensing regime is brought in, for OTT services, the whole architecture of these services may require change".

Telecom carriers have called the regulator's discussion paper on regulating OTT players such as WhatsApp, Telegram and Signal, and on the selective banning of apps during instances of civil unrest, a progressive step and a backing of their concerns, ET reported last week.

Amrita Choudhury, president of the Cyber Café Association of India, said licensing OTTs will not address telco issues and that there are other viable options to address the challenges faced by the telecom industry, such as rationalising spectrum charges.

Further, Section 69 of the IT Act already prescribes powers for monitoring and decryption of information, he said. The Procedure and Safeguards for Interception, Monitoring and Decryption of Information Rules, 2009, formulated under this provision, at least have some procedural safeguards for such orders, he explained.

Mandating a compromise of end-to-end encryption on grounds such as national security or public order may not satisfy the proportionality and necessity test stated by the Supreme Court in the first Puttaswamy judgement, he said. "Given these reasons, it is important that OTT communication services are kept outside the jurisdiction of DoT and Trai," Rizvi said.

Waghre explained that the current approach seeks to resurrect a "licence raj" and compromise citizens' ability to communicate privately, instead of being progressive and protecting the rights of individuals in India.

"Such an approach also fundamentally misunderstands one of the core tenets of communications security: that you cannot have selective compromises. Once a system that intentionally introduces a vulnerability in end-to-end encryption/private communications is created, it can be exploited by anyone," he pointed out.

Read the original here:
Trai's proposal to regulate OTT concerning, may threaten privacy, end encryption: Experts - The Economic Times


A woman and her daughter plead guilty to abortion-related charges … – The Verge

A Norfolk, Nebraska, woman pleaded guilty to helping her daughter have a medication abortion last year. The charges came after Facebook, by court order, provided police with evidence that bolstered a Madison County prosecutor's case against her.

Last year, it emerged that the two were charged after police acquired Facebook messages showing they had obtained abortion medication intended for first-trimester abortions. In a June 2022 affidavit (via Jezebel), the officer investigating Celeste Burgess, the daughter who was charged along with her mother, Jessica Burgess, said he'd served Meta a warrant seeking their messages, and the company quickly complied.

The charges include having an abortion after 20 weeks, false reporting, and tampering with human skeletal remains. According to last year's affidavit, Burgess was about 23 weeks along in her pregnancy, past the Nebraska 20-week post-fertilization abortion ban in place at the time. Nebraska has since implemented a 12-week abortion ban.

The case underscores a crucial privacy drawback of Facebook Messenger, which to this day doesn't default to end-to-end encryption (E2EE) the way other messengers, such as Signal, Meta's own WhatsApp, or Apple's iMessage, do. Because it's not the default, average people not being intentional about their messaging may not realize they can even turn it on.

E2EE is important because, when it's properly implemented, the company offering it has no key to unlock the messages: the only people who can access them are the sender and the receiver, and in some cases, the messages can even be set to be deleted.
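The "relay has no key" property can be illustrated with a deliberately simplified toy: a one-time pad, where the two endpoints share a random key as long as the message. Real E2EE protocols such as Signal's use asymmetric key agreement and authenticated ciphers, not a shared pad; this sketch only shows why a server that relays ciphertext without the key cannot read it.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the key; applying the same key twice restores the original.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # known only to the two endpoints

ciphertext = xor_bytes(message, key)    # this is all the relay server ever sees
recovered = xor_bytes(ciphertext, key)  # only a key holder can decrypt

print(recovered == message)  # True
```

Without the default turned on, Messenger's servers hold message plaintext and can be compelled to produce it, which is exactly what happened here.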

In June, when the investigating officer's affidavit was filed, the Supreme Court was on the precipice of striking down Roe v. Wade, which it did only nine days afterward, on June 24th, 2022. Afterward, existing but previously unenforceable abortion bans around the country immediately took effect, while many states got to work passing new restrictions, and women's rights advocacy groups warned of the digital privacy risks illuminated in cases just like the Burgesses'.

Meta itself has been reluctant to take a stand on abortion. Although then-Meta executive Sheryl Sandberg posted in May 2022 in support of abortion rights, the next day, the company restricted internal discussion of the issue to one-on-one private chats with trusted colleagues or up to five like-minded people in listening sessions, though the company allowed its employees to share their thoughts on their personal Meta social apps.

The company also downweighted abortion content on its platforms well before the Supreme Court struck down the Roe v. Wade decision that had previously served as a barrier against strong abortion laws at the state level.

Jessica and Celeste Burgess are scheduled for sentencing on September 22nd and July 20th, respectively; Celeste pleaded guilty in May. The Madison County prosecutor, attorney Joe Smith, said this was his first charge of illegal abortion after 20 weeks since the previous ban was instituted in 2010.

Update July 12th, 2023 10:27PM ET: Updated to credit the linked affidavit PDF to Jezebel, where it was published previously.

Read more from the original source:
A woman and her daughter plead guilty to abortion-related charges ... - The Verge


Cyber Extortion Trends: Lessons from CL0P and MOVEit – Government Technology

Hacking group CL0P's attacks on MOVEit point to ways that cyber extortion may be evolving, illuminating possible trends in who perpetrators target, when they time their attacks, and how they put pressure on victims.

Malicious actors that successfully target software supply chains can maximize their reach, impacting the initial victims as well as their clients and their clients' clients. And Allan Liska, intelligence analyst at threat intelligence platform provider Recorded Future, noted that cyber extortion groups like CL0P have the money to buy zero-day vulnerabilities to compromise commonly used platforms.

Plus, perpetrators increasingly rely on threats to publish stolen data, rather than file encryption, to put pressure on victims, and are exploring new ways of denying victims access to their data.

And other extortionists are likely watching the MOVEit incident play out and drawing their own takeaways.

"With a lot of these, the first big attack gets the headlines, but these ransomware groups are learning at the same time," Hofmann said. "They're seeing what worked well, what didn't, what tactics worked, and they're learning from each other. So, the next go-around is going to be different."

Groups like CL0P also appear to be putting attention on targeting widely used platforms and exploiting zero-day vulnerabilities.

The MOVEit compromise was CL0P's third known attack on a file transfer service, each one netting more victims. Its 2020 Accellion exploit stole data from roughly 100 companies, while the hackers said their early-2023 attack on GoAnywhere impacted about 130 organizations, per Bleeping Computer. By early July, more than 200 organizations were believed to be affected by the MOVEit hacks, with data breaches affecting more than 17.5 million people, Emsisoft threat analyst Brett Callow told TechCrunch. Of course, hitting victims and getting money out of them are two separate matters.

Cyber criminals can buy zero-day vulnerabilities, said Liska. Paying six figures for zero days in top-name software like Microsoft Exchange may be too spendy for most, but many ransomware groups do have the money to shell out up to five figures to buy zero days in lower-profile, widely used platforms like MOVEit, he said.

"You're not spending more than $100,000 on that. And as far as we can tell, CL0P's made 100 times that, at least, from this particular attack," Liska said. "So, in theory, if they reinvested all of that money, they could buy 100 more of these zero days to these types of platforms or more and still have money left over to vacation in Sochi."

Still, organizations shouldn't forget about more traditional attack methods, Hofmann said. Roughly 90 percent of cyber extortionists still wage their attacks by taking advantage of unpatched Internet-facing systems, remote desktop protocol (RDP) connections where multifactor authentication (MFA) has yet to be implemented, or phishing and stolen credentials.

MOVEit software creator Progress announced that the initially exploited vulnerability, as well as one discovered a few weeks later, took advantage of SQL injection vulnerabilities in the tool.

These are among the oldest forms of vulnerability and are the result of poor coding practices that are preventable, reported Ars Technica.
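A minimal illustration of the flaw and its standard fix, using Python's stdlib sqlite3 (a hypothetical lookup table, not MOVEit's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: concatenating input lets the payload rewrite the query logic.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Fixed: a parameterized query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(leaked)  # every row leaks: [('s3cret',)]
print(safe)    # []
```

The "poor coding practice" and the prevention are one line apart, which is why secure-by-design advocates consider this class of bug inexcusable in modern software.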

Federal efforts are underway to push software developers to design offerings with security baked in, thus improving overall safety of the software landscape.

"That's a good way to go, because a lot of these platforms that are heavily relied on are rickety, because they're not looked at; they've been traditionally ignored by bad guys, and that picture is changing," Liska said.

Realizing that a secure-by-design vision could take decades, in the meantime, organizations should use a defense-in-depth approach to better protect themselves, Liska said.

In ransomware's early days, perpetrators encrypted files and demanded payment. But other methods may be gaining popularity. A recent report found attackers increasingly pressuring victims by stealing their data and threatening to publish it, sometimes, but not always, pairing this with file encryption.

Organizations with sophisticated backup strategies may not need their files back, making traditional encryption-only extortion ineffective, said Lisa Forte, partner at cybersecurity training and consulting provider Red Goat Cyber Security. Plus, encrypting and decrypting are tricky: "Often the malware would be so aggressive that it would corrupt files, so even if the victim paid and they got the decryption key, the file would be corrupted. So, it was quite difficult to make a business case for companies to pay the ransom," Forte said. But threats to publish sensitive stolen data add new pressure.

And even when victims lack good backups, making encryption attacks particularly painful, some extortionists may still prefer the speed and efficiency of data theft-only attacks, Hofmann said.

Forte noted that while CL0P avoided encryption entirely in its attack on MOVEit, many other threat actors have kept it in play. Even extortionists that primarily use data theft as leverage against their victims often still lock up some parts of a victim's network as an opening salvo. The drama of a sudden file encryption and a ransomware splash screen appearing can grab victims' attention.

"One minute you think you're fine, and the next minute everything is locked, and you've got splash screens on every device," Forte said. "That really brings the attention of the board. But definitely the main negotiating chip is the data that's stolen."

Liska has also seen some attackers adopt a new method of denying victims access to their files, creating a dramatic disruption while avoiding the technical complications and hassles of encryption. In these attacks, perpetrators exfiltrate their target's data and then secure-delete those files. Such a move rewrites the erased files with meaningless data, to prevent victims from being able to recover them. Extortionists can then demand ransom in exchange for sending victims back a copy of that exfiltrated data.
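The attackers' actual tooling isn't public, but the general overwrite-then-unlink technique Liska describes can be sketched in a few lines (a simplified illustration with hypothetical filenames; note that on SSDs and journaling filesystems a single overwrite pass is not a reliable guarantee of unrecoverability):

```python
import os
import secrets

def secure_delete(path: str) -> None:
    """Overwrite a file with random bytes, then unlink it, so the
    original contents can't be recovered from the raw disk blocks."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(secrets.token_bytes(size))  # replace contents in place
        f.flush()
        os.fsync(f.fileno())  # force the overwrite to hit the disk
    os.remove(path)

# Demo: the attacker copies the data out, then destroys the victim's copy.
with open("records.db", "w") as f:
    f.write("sensitive customer records")

exfiltrated = open("records.db", "rb").read()  # attacker's copy
secure_delete("records.db")
print(os.path.exists("records.db"))  # False
```

After this, the only surviving copy (absent backups) is the one in the extortionist's hands, which is what makes the tactic "stealing" rather than mere encryption.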

"When we talk about taking the data and then secure deleting it, in effect you are actually stealing it at that point, because the data is no longer sitting on their [hard drives] unless it can be restored from backups. That's where I think this is going to go; I think we'll see more of that," Liska said.

Of course, as Liska noted, victims might restore data from backups. But extortionists could still threaten to publish it.

In the MOVEit compromise, not even CL0P seemed prepared for how much data it managed to steal.

The hackers appeared to hurry to exploit as many systems as possible with the zero day before a patch could be issued. That meant they were scooping up data without necessarily knowing who it came from. Since then, the hackers have been working to sort through their stores of data, Liska said.

Notably, and unusually, rather than contacting its victims with extortion demands, CL0P instead posted a message on its dark-web site telling victims to contact it.

"They basically said, 'Hey, if you were one of the victims, email us,'" Liska said. "They didn't even have a good accounting of who all they hit."

Organizations should take the threat seriously but shouldn't rush to comply, Hofmann said. Past incidents have seen some threat actors discover who they'd hit only when the victims got in touch, and victims that begin negotiations without a clear plan in place risk making the situation worse for themselves. They draw threat actors' attention and might make mistakes, such as inadvertently revealing how badly attacks have affected them, thus handing leverage to the extortionists. In general, victims should never reveal anything that isn't already public knowledge, he said.

And victims should be wary of believing threat actors' claims: sometimes extortionists mistakenly think they've impacted an organization when they've really hit another with a similar-looking website, or one of the organization's subsidiaries, Hofmann said. CL0P may have made such mistakes, with ZDNET reporting in 2022 that CL0P tried to extort Thames Water when it appeared to have actually hit South Staffordshire Water.

All this underscores the need for organizations, including C-suite executives, to participate in practicing and planning incident response and negotiations, to be ready should an extortion attack hit. For example, entities need to pin down details like how much to tell the public; at what point they might engage with the extortionists, and who will do that; and who will decide whether to pay and how that transaction will be made.

Despite the messiness of the attack, Liska believes CL0P has been improving its extortion tactics. Tracking of publicly known wallets suggests that the GoAnywhere hack didn't produce a lot of profit, but this time around, CL0P seems to have better determined how to monetize, he said.

CL0P has been gradually revealing its victims. This may in part indicate that it's still working to sort through the stolen data, but it can also be strategic, Liska said. Each new victim announcement returns public attention to the incident, keeping it in the news for months rather than weeks, which may put more pressure on victims. Still, Hofmann said that, unlike in some past incidents, media reporting on MOVEit hasn't been critical of the impacted organizations: "The optics of it, from a public perspective, are a little bit different, because many entities were affected via a trusted third-party vendor who was brought in specifically for protecting sensitive data."

Forte said CL0P appeared to struggle at first to determine which entities to extort in the affected software supply chain. They'd compromised a file transfer tool created by Progress, and doing so let them obtain data handled by U.K. payroll solutions provider Zellis, for example. That data included payroll information on Zellis' own clients, such as the BBC and British Airways.

"There was a lot of confusion in the early days as to whether they were asking the actual end victims (i.e., the BBC, British Airways, etc.), or whether they were asking Zellis, or whether they were asking the company behind the MOVEit software," Forte said. "The problem they had was that they didn't realize the complexity of the supply chain that they were hitting."

When choosing which impacted entities to threaten, cyber extortionists are often playing for media attention, Liska said. They typically threaten to publish data from whichever impacted entities within the supply chain have the biggest name recognition. Threatening widely recognized end users will get more publicity, even if it technically was another entity's software that was compromised.

"It doesn't matter whether or not they actually hit Ernst & Young or PwC. What matters is there's EY and PwC data that they got there," Liska said. "You have to write about that as a journalist, because they are such big companies and they [cyber extortionists] know that."

CL0P said it would delete any data it had stolen from government, per TechCrunch. Opinions vary over whether organizations can believe these kinds of claims.

On the one hand, cyber criminals have a brand to protect, and some ransomware groups have followed through on promises to help restore data stolen from hospitals, for example, Forte said.

Victims have little motive to pay criminals who are known to go back on their word: "The ransomware groups in general tend to be quite honorable to their word. They need to do that because they have to maintain a good brand image to get insurers, etc., to pay them when they hit other companies."

Plus, ransomware actors may hope that deleting data from entities like governments and hospitals could make them less of a priority for federal law enforcement. They also may hope it helps their image, so they don't "look quite so evil," she said.

Liska, meanwhile, said cyber criminals often pay lip service to deleting the data in hopes of easing authorities' attention on them, but he expects CL0P to still share or sell the government data.

"You should never assume a ransomware actor is actually going to delete stolen data. They will claim it up and down, [but] once that data is stolen, that is out there, and you have to assume that it's going to be out there forever," Liska said.

One possible buyer? The Russian government. There appears to be some evidence suggesting a level of coordination between some cyber crime groups and the Russian government, which could enable gangs like CL0P to make such a sale, Liska said. But he cautioned against overstating this relationship, emphasizing the unavailability of evidence to indicate the Russian government is controlling the cyber criminals.

View original post here:
Cyber Extortion Trends: Lessons from CL0P and MOVEit - Government Technology
