
Illegal Cryptocurrency Mining Operation Shut Down in Malaysia – Crypto Briefing


A recent crackdown by authorities in Miri, Borneo, led to the seizure of 34 cryptocurrency mining servers that were found to be running off stolen electricity, according to a report from local Malaysian publication The Borneo Post:

All the equipment used for the mining operation, including the direct tapping cables and servers, were seized. A police report has been lodged and an investigation is currently underway.

The operation was discovered following a tip from the public, and Sarawak Energy estimated that it was using around 6,000 Malaysian ringgit ($1,300) worth of stolen electricity monthly.

This seizure is the latest in a series of actions against illegal mining in the area, including an incident earlier this year in Senadin, where 137 servers were seized. These activities have put additional pressure on energy providers and authorities alike, leading to increased efforts to counteract illegal operations.

The illegal mining operation's discovery comes amid Bitcoin's network difficulty reaching record levels in 2023. The mining ecosystem has become highly competitive, and some experts suggest it will only get worse.

This is because the Bitcoin halving is set to happen in April 2024. Many experts say that, with network difficulty at record highs, the reward could become difficult to earn profitably: mining one BTC is estimated to cost a company upwards of $30,000, with the block reward being a measly 3.125 BTC, worth around $92,000 at the time of writing:

Nearly half of the miners will suffer given they have less efficient mining operations with higher costs.

As of right now, mining one Bitcoin costs a company around $10,000-$15,000, with the reward being 6.25 BTC, or around $184,000.
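The arithmetic behind these figures is easy to check. Below is a minimal sketch in Python, assuming a BTC spot price of about $29,440; that price is our assumption, back-solved from the reward valuations quoted above rather than taken from the article:

```python
# Back-of-the-envelope check of the mining economics cited above.
# Assumption: BTC spot price of ~$29,440, chosen so both reward
# valuations in the text ($184,000 and $92,000) come out consistent.

BTC_PRICE_USD = 29_440  # assumed spot price at the time of writing

def reward_value(block_reward_btc: float) -> float:
    """USD value of the coinbase reward for one block."""
    return block_reward_btc * BTC_PRICE_USD

print(f"Pre-halving reward (6.25 BTC):   ${reward_value(6.25):,.0f}")   # $184,000
print(f"Post-halving reward (3.125 BTC): ${reward_value(3.125):,.0f}")  # $92,000

# Per-BTC margin under the cost estimates quoted in the article.
for label, cost_per_btc in [("today, low", 10_000),
                            ("today, high", 15_000),
                            ("post-halving", 30_000)]:
    margin = BTC_PRICE_USD - cost_per_btc
    print(f"Cost ({label}): ${cost_per_btc:,} -> margin ${margin:,} per BTC")
```

At the assumed price, the post-halving margin turns slightly negative, which is exactly the squeeze on less efficient miners that the quoted expert is pointing to.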

The information on or accessed through this website is obtained from independent sources we believe to be accurate and reliable, but Decentral Media, Inc. makes no representation or warranty as to the timeliness, completeness, or accuracy of any information on or accessed through this website. Decentral Media, Inc. is not an investment advisor. We do not give personalized investment advice or other financial advice. The information on this website is subject to change without notice. Some or all of the information on this website may become outdated, or it may be or become incomplete or inaccurate. We may, but are not obligated to, update any outdated, incomplete, or inaccurate information.

You should never make an investment decision on an ICO, IEO, or other investment based on the information on this website, and you should never interpret or otherwise rely on any of the information on this website as investment advice. We strongly recommend that you consult a licensed investment advisor or other qualified financial professional if you are seeking investment advice on an ICO, IEO, or other investment. We do not accept compensation in any form for analyzing or reporting on any ICO, IEO, cryptocurrency, currency, tokenized sales, securities, or commodities.

See full terms and conditions.

Read more from the original source:
Illegal Cryptocurrency Mining Operation Shut Down in Malaysia - Crypto Briefing


Council votes to regulate cryptocurrency mining within city limits – Stuttgart Daily Leader

The Stuttgart City Council unanimously voted to regulate data centers that could move into the city in the future. All aldermen voted for the measure except Cache Bledsaw, who did not attend the special called meeting on Friday.

The council adopted Ordinance 2012, which puts restrictions on the noise levels crypto-mining data centers can create.

Code Enforcement Officer Eric Mahfouz said these centers are quite loud, as high as 85 decibels, which would create a nuisance for neighbors.

He said these centers are just boxes that house servers and create noise and heat.

Mahfouz said Act 851 of 2023 will go into effect on Aug. 1, and once the law goes into effect, there will be little that cities and counties can do to regulate data centers used to mine cryptocurrency. The new Arkansas law limits the kinds of regulations local governments can implement on digital asset mining companies by prohibiting discrimination against them and requiring any regulations to treat them the same as data centers.

"There are other cities and counties all over the state having to act quickly because once (Act 851) goes into effect, regulations cannot be passed to regulate these data centers," Mahfouz explained.

The city therefore passed Ordinance 2012, which spells out what the noise levels must be as well as the consequences for non-compliance.

City Attorney Robert Dittrich said the city's ordinance is the same ordinance passed by the Arkansas County Quorum Court at its July meeting.

The ordinance was passed with an emergency clause, so it went into effect immediately, and since it was in place before Aug. 1, 2023, it is enforceable even with Act 851 going into effect.

Continued here:
Council votes to regulate cryptocurrency mining within city limits - Stuttgart Daily Leader


Artificial Intelligence and Digital Diplomacy – E-International Relations

The coronavirus pandemic (COVID-19) gave a strong impetus to the development of science, the general processes of digitalization, and the introduction of an increasing number of electronic services. In healthcare, these processes manifested in the creation of tracking applications, information-sharing platforms, telemedicine, and more. However, the boom in such technologies also showed the need to develop particular policies and legal mechanisms to regulate their implementation: although they can provide benefits, their use can also pose risks, such as cyberattacks. Digital technologies have also become widely used in politics. Due to the lockdowns around the world during 2020 and 2021, many ministerial meetings and meetings between heads of state were held online. International organizations such as the United Nations (UN) resorted to hybrid event formats allowing presidents to speak online.

The possibilities of the Internet and the application of digital technologies are not new. However, their entry into the political atmosphere, where everything is permeated with diplomatic protocol and a certain secrecy, causes some concern. Perhaps the most apparent concern is the use of deepfake technology to digitally manipulate another person's appearance. With modern AI technology, voice imitation is also possible.

Diplomatic channels may be scrutinized by the intelligence agencies of other countries and by criminal groups that can gain access to specific technologies, such as wiretapping. Quite often, secret data (photos, videos, audio recordings) as well as fake news, whose veracity an ordinary person cannot verify in any way, appear in the press. Such manipulations pose a significant threat to social stability and affect public opinion. Modern technologies can also be used in the political struggle against competing forces. Therefore, there is a need to rethink the familiar political process, considering new realities and possibly developing new digital or electronic diplomatic protocols.

The study of the application of AI in politics is a young field. A search as of June 23, 2023, in Google Scholar for the query "artificial intelligence in politics" returns 61 results, and "AI in politics" returns 77. Similar queries to the Google search engine for the same period produce 152,000 and 95,600 results, respectively. The publication sources are generally not political journals. More often, these journals publish articles on new technologies and deal with the ethical aspects of AI use (Vousinas et al., 2022).

Speaking about the modern understanding of the concept, what is AI? Kaplan and Haenlein (2019) define it as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation." If this definition is interpreted in relation to politics, we think AI can be described as a system that allows politicians to process information received from different sources and generalize it into a single database used in the decision-making process.

AI can also be used for internal political goals. A study in Portugal (Reis and Melo, 2023) suggested that the introduction of e-services supports an active role for governments in responding to the needs of their citizens, contributing to the development of e-democracy. The article points to increased transparency and trust in political institutions due to their widespread use. In our opinion, the paper lacks an analysis of possible counter-effects, where AI can become a weapon for falsification and for lowering the level of democracy. What are the mechanisms of interaction between the population and political institutions with the complete digitalization of the process? This issue requires a detailed assessment, especially in countries where the principles of democracy are often violated.

The possibility of AI bias poses a potential risk and presents a new challenge for the global community, including politicians. In 2018, a study published by the Council of Europe assessed possible risks of discrimination resulting from algorithmic decision-making and other types of AI use (Zuiderveen Borgesius, 2018). Today, with advanced technologies, transferring decision-making power to algorithms can lead to discrimination against vulnerable groups, such as people with disabilities. Therefore, programmed decision-making should not always be based solely on cost-effectiveness principles. For example, caring for vulnerable population groups is a burden on a country's budget, yet it is obligatory in a state governed by the rule of law and is ethically justified.
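The reasoning here can be made concrete with a toy sketch. The data and the allocation rule below are entirely invented; the point is only that an objective optimized for cost alone can systematically exclude the group that needs accommodation:

```python
# Toy illustration (invented data): a rule that funds the cheapest cases
# first exhausts the budget on low-cost applicants and never reaches the
# applicants whose accommodations make them more expensive to serve.

applicants = [
    {"id": "A", "needs_accommodation": False, "service_cost": 100},
    {"id": "B", "needs_accommodation": True,  "service_cost": 260},
    {"id": "C", "needs_accommodation": False, "service_cost": 120},
    {"id": "D", "needs_accommodation": True,  "service_cost": 240},
]
budget = 300

funded, spent = [], 0
for person in sorted(applicants, key=lambda p: p["service_cost"]):
    if spent + person["service_cost"] <= budget:
        funded.append(person["id"])
        spent += person["service_cost"]

print(funded)  # ['A', 'C'] -- both applicants needing accommodation are excluded
```

The remedy the text implies is not a better optimizer but a constraint: a legal or ethical floor that the algorithm is not allowed to trade away.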

The possibility of heads of state and government making political decisions based entirely on AI proposals is also quite controversial, since even the most rational decision from the algorithm's point of view may be devoid of ethical grounds, contradict the political slogans of leaders, or go against the objectives of the government or provisions of the law. Therefore, human control and policy adjustment are mandatory at this stage of scientific development; we believe this will remain relevant even in the future.

From the point of view of the private use of AI at the level of an individual state, the possibilities are also wide. For example, online search engines such as Google hold significant information about users and their preferences. This information can also be used for relatively harmless purposes, such as targeted advertising for political campaigns. Also, based on the processing of requests from the population, the most pressing issues that require a response can be identified. With the help of AI, special tools for collecting feedback from the population can be developed, improving communication between the government and the population, i.e., potential voters. Accelerating and automating the delivery of services to the population, such as issuing a necessary document or a certificate of employment, is also among the potential benefits of applying AI. It should be noted that, to varying degrees, the mentioned opportunities are already actively used in countries with high economic development.
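As a rough illustration of the feedback idea, here is a minimal sketch with hypothetical requests and a hand-built keyword-to-topic map; a real system would use proper language models rather than keyword matching:

```python
# Tally topics across incoming citizen requests to surface pressing issues.
# The requests and the keyword-to-topic map are hypothetical.
from collections import Counter

requests = [
    "pothole on main street", "clinic waiting times too long",
    "street lighting broken", "pothole near the school",
    "waiting times at the clinic", "pothole on river road",
]

topics = {"pothole": "roads", "clinic": "healthcare",
          "waiting": "healthcare", "lighting": "infrastructure"}

counts = Counter()
for text in requests:
    for keyword, topic in topics.items():
        if keyword in text:
            counts[topic] += 1
            break  # count each request once

print(counts.most_common())  # [('roads', 3), ('healthcare', 2), ('infrastructure', 1)]
```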

However, AI can also be used to spread misinformation and manipulate public opinion. AI tools are already used to launch mass disinformation campaigns and disseminate fake content. Fake news is sometimes observed during election campaigns.

Today, the advent of new technologies creates new challenges. The GDPR (General Data Protection Regulation), adopted in Europe in 2018, obliges organizations to inform individuals about data collection. Moreover, in 2021 a new proposal for broad AI regulation within the EU was put forward. If adopted, the document will become the first comprehensive AI regulatory framework (Morse, 2023). The adoption of such a law puts the need for international regulation on the agenda. Perhaps in the near future, various countries around the world will begin developing and adopting similar laws. However, the development and adoption of any law require the participation of political institutions, which creates a new direction of activity and research within political science.

The global application of AI laws is also a political issue. A similar document, the UN cybercrime convention, is already under discussion. However, such laws, especially at the global level, will also have to be based on protecting human rights, to exclude the legitimization of increased political control over the population on the Internet. Moreover, in the context of globalization, the mechanisms for controlling AI-related crimes and implementing punishment are also unclear.

The use of digital platforms for diplomatic processes, such as negotiations, networking, and information exchange, has created a new field in the scientific literature: digital diplomacy. Digitalization of diplomacy takes place on different levels. Ministries and politicians create profiles on social networks, where they share their opinions on specific issues. It is no longer necessary to wait for an official briefing from the Foreign Ministry. Diplomats often express their position online, which can be considered a semi-official approach. Ultimately, a publication can always be deleted, or explained away with the claim that the page has been hacked; in modern conditions, such a risk exists.

Recently, with the launch of ChatGPT, the media has been filled with articles about its role in the future of diplomacy. Diplomats can use AI to automate some of their work, such as preparing press releases. Another possibility is that prepared information can be distributed simultaneously to all information platforms with one click, which simplifies and speeds up the process. This matters because today most people receive information via the Internet, often directly on their smartphones. However, full automation, in this case, is also not without risks.
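The one-click distribution idea amounts to a simple fan-out. A minimal sketch follows; the channel functions are hypothetical stand-ins for real platform APIs:

```python
# Fan the same press release out to every configured channel at once.
# The three publish functions are placeholders, not real platform APIs.

def post_to_website(text: str) -> None:
    print(f"[website]    {text}")

def post_to_social(text: str) -> None:
    print(f"[social]     {text}")

def post_to_newsletter(text: str) -> None:
    print(f"[newsletter] {text}")

CHANNELS = [post_to_website, post_to_social, post_to_newsletter]

def publish_everywhere(press_release: str) -> None:
    """One action distributes the same text to all channels simultaneously."""
    for channel in CHANNELS:
        channel(press_release)

publish_everywhere("Ministry statement: the talks concluded successfully.")
```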

Although AI can be used to generate ideas, there is some concern about the secrecy of information processing. There have already been reports of leaks involving data entered into ChatGPT (Derico, 2023; Gurman, 2023). How safe is this in the case of secret or diplomatic documents? Or the personal information of the diplomat who uses the platform? Moreover, the language of diplomacy is very sensitive regarding the wording and expressions used. The text generated by a program may be ideal in terms of grammar but unacceptable in terms of diplomacy.

The use of AI and the general digitalization of society also impact diplomacy. Nevertheless, are we ready for politics generated by AI? AI opens a new page in politics and creates new challenges. Diplomacy has always required a certain amount of flexibility from diplomats, but it must now be adapted to digital realities. Politicians and diplomats should be prepared for the possibility of data leaks on the Internet, as well as double-check incoming information.

The potential for bias in AI algorithms is also a significant issue. Moreover, the veracity of the output can be zero, since the program is designed to issue an answer whether it is correct or not, and its content depends on the algorithms specified by the developers. Automation of information collection in political processes is therefore only sometimes justified. On the one hand, the human brain cannot physically remember and process the enormous amount of information generated daily, and if a political officer collects information from official resources, automation can simplify the work. On the other hand, a reference to an unconfirmed resource may lead to a distortion of the original data and, accordingly, adversely affect the preparation of a report. Even so, such tools can be extremely useful for politicians when addressing public inquiries and identifying the most pressing issues.

The regulation of AI in practice has some peculiarities. At this stage of historical development, AI still cannot implement decisions independently; it can only carry out the tasks that people have assigned to it. We can analyze the benefits of its use, but ChatGPT and similar models only process information obtained from sources such as the Internet. Yet the potential regulation of global politics by AI, or its specific programming, could expose us to the threat of digital totalitarianism, in which control begins to interfere with privacy and human rights. Therefore, legal regulation of AI use is crucial, and its algorithms should undergo an ethical and political assessment before implementation. Various countries are also interested in obtaining intelligence information in real-life conditions; given the development of science, intelligence services will gain new opportunities for intervention, and in practice regulation in this area is rarely possible. Moreover, AI is developing fast, and how it will be applied in practice if it reaches independence is an issue we will have to solve.

References

Derico, Ben. 2023. "ChatGPT Bug Leaked Users' Conversation Histories." BBC News, March 22, 2023, sec. Technology. https://www.bbc.com/news/technology-65047304.

Gurman, Mark. 2023. "Samsung Bans Generative AI Use by Staff after ChatGPT Data Leak." Bloomberg.com, May 2, 2023. https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak#xj4y7vzkg.

Morse, Chandler. 2023. "Lessons from GDPR for Artificial Intelligence Regulation." World Economic Forum, June 16, 2023. https://www.weforum.org/agenda/2023/06/gdpr-artificial-intelligence-regulation-europe-us/.

Kaplan, Andreas, and Michael Haenlein. 2019. "Siri, Siri, in My Hand: Who's the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence." Business Horizons 62, no. 1: 15-25.

Reis, João Carlos Gonçalves dos, and Nuno Melo. 2023. "E-Democracy: Artificial Intelligence, Politics and State Modernization."

Vousinas, Georgios L., Ilektra Simitsi, Georgia Livieri, Georgia Chara Gkouva, and Iris Panagiota Efthymiou. 2022. "Mapping the Road of the Ethical Dilemmas behind Artificial Intelligence." Journal of Politics and Ethics in New Technologies and AI 1, no. 1: e31238.

Zuiderveen Borgesius, Frederik. 2018. "Discrimination, Artificial Intelligence, and Algorithmic Decision-Making." Council of Europe.

See original here:
Artificial Intelligence and Digital Diplomacy - E-International Relations


Top 4 Uncommon Career Paths In The Cryptocurrency Industry – The VR Soldier

The cryptocurrency industry has grown exponentially over the past decade, giving rise to numerous career opportunities beyond the traditional roles of traders, developers, and analysts. As blockchain technology continues to disrupt various sectors, new and uncommon career paths are emerging. Let's delve into the top four uncommon career paths in the cryptocurrency industry.

As regulatory frameworks around cryptocurrencies become more stringent, the demand for compliance specialists is on the rise. These professionals play a crucial role in ensuring that crypto businesses adhere to legal and regulatory requirements. They develop and implement compliance policies, conduct risk assessments, and navigate complex compliance challenges to maintain legal integrity in this dynamic and rapidly evolving industry.

With the surge in DeFi projects and smart contracts, the need for auditors has increased significantly. DeFi auditors review and assess the security and functionality of smart contracts and decentralized applications. Their role is vital in identifying potential vulnerabilities and ensuring the safety of users' funds and data within the decentralized ecosystem.

As the interest in cryptocurrencies grows, so does the need for accurate and insightful reporting. Crypto journalists cover breaking news, industry trends, and developments in the crypto space. They communicate complex concepts to a broader audience and help shape public perception by providing well-researched, unbiased, and informative content.

In the competitive cryptocurrency market, marketing plays a crucial role in building brand awareness and driving user adoption. Crypto marketing specialists use their expertise to develop effective strategies that target crypto enthusiasts and potential investors. They utilize social media, content creation, influencer partnerships, and community engagement to promote cryptocurrencies and blockchain projects.

In conclusion, the cryptocurrency industry offers a diverse range of career opportunities beyond the well-known roles. Whether it's ensuring regulatory compliance, auditing smart contracts, reporting on the latest developments, or driving marketing campaigns, these uncommon career paths are gaining prominence in the ever-evolving world of cryptocurrencies. Embracing these roles can not only open up exciting opportunities for professionals but also contribute to the continued growth and legitimacy of the cryptocurrency ecosystem.

Disclosure: This is not trading or investment advice. Always do your research before buying any cryptocurrency or investing in any service.


View original post here:
Top 4 Uncommon Career Paths In The Cryptocurrency Industry - The VR Soldier


Artificial intelligence: Is Remini safe to use? Remini, baby AI generator takes TikTok by storm as some raise security concerns – WLS-TV

CHICAGO (WLS) -- Have you ever wondered what your future kids will look like?

Well, there's an app for that. It's called "Remini." The app uses artificial intelligence to generate photos of what your children could look like, but one cyber security expert is voicing concerns as Remini takes social media by storm. Daisy Reyes is a self-proclaimed TikTok influencer with nearly 500,000 followers.

"I'm always looking at what's the upcoming trend, the new trends. I've seen everybody, literally everybody doing this trend. So you already know, I had to hop on it!" Reyes said.


Reyes said she recently downloaded the Remini app, which allows you to see what your future children could look like. She uploaded a picture of herself and her boyfriend, Rex, and boom! An image of her future baby was generated by artificial intelligence.

"When she showed it to me, I was kind of stunned. I thought it did kind of look like me," said Rex Flores.

"I felt like it was a mixture, but definitely more like him," Reyes added.

Melissa McDuffie, who has nearly 200,000 followers on TikTok, said she also tried the Remini app.

"I really like this trend, because I'm at the age where I'm married, and I'm ready to start having children. I thought it would be interesting to go ahead and see what they might look like," McDuffie said. "Artificial intelligence, the AI, can take your pictures and formulate this image, and it's so realistic. I'm an aunt to eight children, and they look very similar to my nieces. And, it's exciting. They're really cute!"


But, as fun as the app may be, cyber security expert David Barton said users should be cautious.

"The scary thing about this, Samantha, is how accurate it has been. I've seen video clips of folks who have taken the father and the mother, put them in the app and it kicks off a picture that looks like their kid. And, that's a little bit creepy. But, on the flip side, it's kind of cool. It's kind of novel," Barton said.

Barton said users need to read the terms and conditions of these types of apps to understand how your image and likeness could be used. It's also important to know what protections are in place to ensure the artificial intelligence isn't exploited by a third party.

"Are we unintentionally giving future pictures of our kids for folks who might be using it for malicious purposes? I don't know," Barton said. "My gut says, I wouldn't do it. But, I'm a little bit older and more conservative than a lot of people. If you're going to do it, understand there are risks. At the end of the day, we AI-manage our lives by the risks we deal with day in and day out."


The social media stars say they understand any potential risk, but believe the Remini app has brought joy to many of their followers who also use it.

"We post pictures on social media every day, let alone the new AI generator with this app. So, I feel like you take that risk and agree to be on social media," McDuffie said. "For myself, I want to be a mother so bad, I could cry. More than anything, I want to be a mother. So, to be able to see an image like that made me happy."

In a statement, the parent company of Remini, which is based out of Milan, Italy, told the I-Team:

"Remini gives users the ability to imagine their lives in many different ways, with stunning realism, and we care deeply about ensuring all our users have a safe and fun experience using our app. By its very nature, the app is constantly evolving, and we will continue to take action to apply safeguards and ensure user privacy...We take data protection and privacy very seriously and have robust protocols in place to ensure we safeguard user rights while allowing them to experience and enjoy the transformative power of generative AI."

McDuffie said the app has intensified her baby fever.

"It made me hopeful for the future, and excited," McDuffie said. "Because they were cute and beautiful. They were beautiful."

Bending Spoons, the company that owns Remini, told the I-Team that facial recognition is not used in the app, and that images are encrypted and stored with a reputable U.S.-based provider, using what they say are "state-of-the-art security standards." The company said users always retain control over their data, and that it does not sell, lease, or trade users' images to any third parties.

The company said it applies comprehensive safeguards to thwart misuse of content.


Read more here:
Artificial intelligence: Is Remini safe to use? Remini, baby AI generator takes TikTok by storm as some raise security concerns - WLS-TV


OpenAI's Sam Altman links 2 hot tech trends with his new Worldcoin: artificial intelligence and crypto. But there's a lot more to the story – Fortune

Artificial intelligence has taken over much of the financial hype cycle that used to belong to cryptocurrency. Now comes a project that's trying to combine the two. Called Worldcoin, it's an effort to create a global network of digital identities for a world in which AI robots become harder to distinguish from humans. Users of the service scan their eyeballs to create digital credentials and are rewarded with Worldcoin tokens, though the cryptocurrency isn't available in the US. More than 2 million people have signed up for a World ID, a reflection of the novel compensation model and the reputation of one of its founders, Sam Altman, the chief executive officer of OpenAI, which created the popular ChatGPT chatbot service. But early scrutiny by international regulators and some data security problems have stirred controversy and threatened to slow Worldcoin's momentum.

The project uses a device called an orb, which looks like a bigger, silver-colored Magic 8 Ball, to scan a person's iris, which has a unique pattern in every human, much as a fingerprint does. That creates a World ID, which grants its holders "proof of personhood," a way to verify their identities on various online services without disclosing their name or other personal data. Worldcoin is also the name of the cryptocurrency that's used to reward people who scan their eyeballs or who support the project. The Worldcoin Foundation is listed as the steward of the technology, but the organizers say that it has no owners or shareholders and that holders of Worldcoin tokens will have a say in the direction of the project. Worldcoin is also affiliated with a tech company called Tools for Humanity Corp. that says it was established to accelerate the transition towards a more just economic system.
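The de-duplication idea at the heart of proof of personhood can be sketched in a few lines. This is only a conceptual illustration: Worldcoin's actual pipeline (iris codes, zero-knowledge proofs, matching of noisy biometric templates) is far more involved, and a raw hash would not work on real scans, which never repeat bit-for-bit:

```python
# Conceptual sketch: derive a stable, non-reversible identifier from a
# biometric template so uniqueness can be checked without storing a name.
import hashlib

registered_ids: set[str] = set()

def world_id(iris_template: bytes) -> str:
    # One-way hash: the ID can be compared for uniqueness, but the
    # template cannot be recovered from it.
    return hashlib.sha256(iris_template).hexdigest()

def enroll(iris_template: bytes) -> bool:
    """Return True for a new enrollment, False for a duplicate person."""
    uid = world_id(iris_template)
    if uid in registered_ids:
        return False
    registered_ids.add(uid)
    return True

print(enroll(b"alice-iris-template"))  # True  -- first enrollment succeeds
print(enroll(b"alice-iris-template"))  # False -- second attempt is rejected
```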

Worldcoin is promising to link two of the hottest contemporary financial trends: artificial intelligence and crypto. As AI becomes more popular, the argument goes, World ID will become more needed to help distinguish between humans and AI-powered smart software. Another big reason for the build-up is the involvement of Altman, who's the public face of ChatGPT. The AI chatbot was introduced in November 2022 and ignited the public's imagination about what artificial intelligence can do.

There are several controversies. One is that the project is creating tokens to compensate participants (outside the US and the other excluded countries) who scan their irises. Also, several of the project's early backers were swept up in last year's crypto collapse, including FTX founder Sam Bankman-Fried, who's under house arrest and facing fraud charges. An MIT Technology Review investigation found evidence of what it called "deceptive and exploitative practices" used by Worldcoin to attract participants in countries such as Indonesia, Ghana and Chile. The project is being scrutinized in Europe for its collection of biometric data, which may run afoul of some countries' privacy laws. There have also been issues with the theft of login credentials from some Worldcoin operators who were signing up new users, and with black-market sales of World IDs. Worldcoin said it upgraded its security in response.

The project had registered and created digital identities for more than 2.1 million people by the end of July, though the vast majority of those were issued before the official July 24 launch. The related cryptocurrency has fluctuated: the price of a Worldcoin token roughly doubled on launch day to as high as $3.58 before dropping to as low as $1.92 a week later. But Worldcoin still had a total market capitalization of $267 million on July 31, according to CoinMarketCap.

Altman, 38, is a seasoned entrepreneur. In addition to leading OpenAI, he was the longtime president of Y Combinator, the startup accelerator, and has investments in Airbnb, Stripe, Dropbox and Instacart. He also co-founded Loopt, a smartphone-location service.

Altman has said the project wouldn't offer tokens in the US and in some other countries where the regulatory rules regarding crypto were uncertain or unclear. Indeed, Worldcoin is among many crypto projects that have chosen to stay out of the US market in recent years as US regulators and lawmakers continue to grapple with which coins are classified as securities and which ones aren't. Gary Gensler, the chairman of the US Securities and Exchange Commission, had long said that most coins were securities. But in a closely followed legal case, a judge ruled in July that Ripple Labs Inc.'s XRP token is a security only when it's sold to institutional investors, not when it's sold to retail investors via exchanges. That left the matter unsettled. More litigation and regulation are sure to follow, leaving crypto issuers with uncertainty.

Originally posted here:
OpenAI's Sam Altman links 2 hot tech trends with his new Worldcoin: artificial intelligence and crypto. But there's a lot more to the story - Fortune


Lazarus hackers linked to $60 million Alphapo cryptocurrency heist – BleepingComputer

Blockchain analysts blame the North Korean Lazarus hacking group for a recent attack on payment processing platform Alphapo where the attackers stole almost $60 million in crypto.

Alphapo is a centralized crypto payment provider for gambling sites, e-commerce subscription services, and other online platforms, which was attacked on Sunday, July 23rd, with the initial stolen amount estimated to be $23 million.

This theft included over 6 million USDT, 108k USDC, 100.2 million FTN, 430k TFL, 2.5k ETH, and 1,700 DAI, all drained from hot wallets, likely made possible by a leak of private keys.
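For a sense of how such a haul adds up, here is a hedged valuation sketch. The token amounts are from the report; the prices are placeholders (rough late-July-2023 levels for the liquid assets, and guesses for the illiquid FTN and TFL back-solved so the sum lands near the reported $23 million), so the line items are illustrative only:

```python
# Illustrative valuation of the drained tokens. Amounts from the report;
# prices are ASSUMED placeholders, not market data.

stolen = {  # token -> amount
    "USDT": 6_000_000, "USDC": 108_000, "FTN": 100_200_000,
    "TFL": 430_000, "ETH": 2_500, "DAI": 1_700,
}

assumed_price_usd = {
    "USDT": 1.00, "USDC": 1.00, "DAI": 1.00,
    "ETH": 1_870.00,  # rough late-July-2023 level
    "FTN": 0.12,      # guess for an illiquid token
    "TFL": 0.50,      # guess for an illiquid token
}

total = 0.0
for token, amount in stolen.items():
    value = amount * assumed_price_usd[token]
    total += value
    print(f"{token:>4}: {amount:>12,} x ${assumed_price_usd[token]:<8} = ${value:,.0f}")
print(f"Illustrative total: ${total:,.0f}")  # ~ $23 million
```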

Well-known crypto chain investigator "ZachXBT" warned yesterday that the attackers also drained an additional $37 million of TRON and BTC, as seen in Dune Analytics data, raising the total amount stolen from Alphapo to $60 million.

Moreover, ZachXBT claimed that the attack appears to carry the characteristics of a Lazarus heist, backing the claim by noting that Lazarus creates "a very distinct fingerprint on-chain," though no further details were provided.

The Lazarus Group is a North Korean threat actor with ties to the North Korean government, previously linked to the $35 million Atomic Wallet heist, the $100 million Harmony Horizon hack, and the $617 million Axie Infinity theft.

Typically, Lazarus uses fake job offers to lure employees of crypto firms into opening infected files, compromising their computers and stealing their account credentials.

This creates an attack avenue into the victim's employer network, where they can get unauthorized access and meticulously plan and execute attacks costing millions of dollars.

Analysts tracking the movement of the stolen funds to cryptocurrency exchanges report seeing laundering attempts through Bitget, Bybit, and others. At the same time, Lazarus is also known for using small cryptocurrency mixing services.

Dave Schwed, COO of blockchain security company Halborn, told BleepingComputer that the attackers likely stole private keys, allowing access to the wallets.

While we lack specifics, it seems that the alleged "hack" likely pertains to the theft of private keys. This inference comes from observing the movement of funds from independent hot wallets and the sudden halting of trading. Moreover, the subsequent transactions have led ZachXBT, a renowned "on-chain sleuth", to surmise that North Korea's notorious Lazarus group is the perpetrator of this attack.

Given their history of similar exploits, I find myself agreeing with this theory. -D. Schwed

At this time, BleepingComputer has not been able to independently confirm the involvement of the North Korean threat group in the Alphapo hack with blockchain analysis firms or law enforcement agencies.

We will update this post as soon as we know more.

See original here:
Lazarus hackers linked to $60 million Alphapo cryptocurrency heist - BleepingComputer


Indonesia opens world’s first state-backed cryptocurrency trading … – Thailand Business News

Indonesia has launched the world's first state-backed cryptocurrency exchange, supervised by the Commodities Futures Trading Supervisory Agency.

The bourse will list licensed crypto companies and aims to strengthen the regulatory environment for the country's booming cryptocurrency sector. While the use of cryptocurrencies as a payment medium is currently prohibited, investments in cryptocurrency are allowed. Indonesia saw a significant increase in crypto trading in 2021, reaching $56 billion.

The new exchange requires businesses to obtain a crypto exchange provider (CEP) license and meet certain criteria. Indonesia's fintech industry is competitive, with P2P lending and e-payment platforms dominating. Foreign fintech firms can fill the financing gap for underbanked adults. The country has a high smartphone penetration rate and a growing middle class, making it a potential battleground for digital payment apps.

Indonesia Launches World's First State-Backed Cryptocurrency Bourse (aseanbriefing.com)

About the author

ASEAN Briefing features business news, regulatory updates and extensive data on ASEAN free trade, double tax agreements and foreign direct investment laws in the region. Covering all ASEAN members (Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, Philippines, Singapore, Thailand and Vietnam)

See original here:
Indonesia opens world's first state-backed cryptocurrency trading ... - Thailand Business News


Artificial intelligence could mean more litigation for restaurant … – Restaurant Business Online

Could artificial intelligence land more restaurant companies in court? If they rely on a computer brain to handle recruitment, the answer is definitely a yes, according to this week's episode of the Working Lunch podcast.

"The minute you use these tools, we're going to see a lot of activity from the EEOC," or Equal Employment Opportunity Commission, said this week's guest, Ed Egee, VP of government relations and workforce development for the National Retail Federation.

The issue, he told podcast co-hosts Joe Kefauver and Franklin Coley, is that trial lawyers are looking for a new gold mine, and this could be it.

He explained that using an artificial intelligence tool to sort through resumes can whittle stacks of thousands down to just the few that meet an employer's key criteria for candidates. A machine would ignore everything but those desired characteristics.

The problem, Egee continued, is that lawyers could argue the process is discriminatory per se, since other traits or characteristics might be ignored. Similarly, some applicants might be ruled out instantly if they're unskilled at drafting a resume, making them victims of discrimination against the uneducated or poorly literate.

The likely way to avert discrimination suits, Egee said, would be involving humans in the screening function at some stage.
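A minimal sketch of that human-in-the-loop pattern follows; the screening criteria and resumes are hypothetical:

```python
# The machine narrows the pool, but a machine rejection is routed to a
# person for review rather than being discarded. Criteria are hypothetical.

REQUIRED_KEYWORDS = {"food safety", "scheduling"}  # hypothetical key criteria

def machine_screen(resume_text: str) -> bool:
    """True only if the resume mentions every required keyword."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

def screen_with_human_review(resumes: dict[str, str]) -> dict[str, list[str]]:
    shortlist, human_review = [], []
    for name, text in resumes.items():
        (shortlist if machine_screen(text) else human_review).append(name)
    return {"shortlist": shortlist, "human_review": human_review}

resumes = {
    "ana": "10 yrs kitchen management, food safety certified, staff scheduling",
    "ben": "ran a family diner for a decade",  # qualified, but a sparse resume
}
print(screen_with_human_review(resumes))
# {'shortlist': ['ana'], 'human_review': ['ben']} -- ben gets a human look
# instead of being silently filtered out for a poorly drafted resume.
```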

To learn more about this little-discussed risk associated with embracing artificial intelligence, download this weeks episode of Working Lunch from wherever you get your podcasts.


More here:
Artificial intelligence could mean more litigation for restaurant ... - Restaurant Business Online


The EU Artificial Intelligence Act: What’s the Impact? – JD Supra

The EU Artificial Intelligence Act (or AI Act) is the world's first legislation to regulate the use of AI. It leaves room for technical soft law; but, inevitably (being the first and being broad in scope), it will set principles and standards for AI development and governance. The UK is concentrating more on soft law, working towards a decentralized, principle-based approach. The US and China are working on their own AI regulations, with the US focusing more on soft law, privacy, and ethics, and China on explainable AI algorithms, aiming for companies to be transparent about their purpose. The AI Act marks a crucial step in regulating AI in Europe, and a global code of conduct on AI could harmonize practices worldwide, ensuring safe and ethical AI use. This article gives an overview of the EU Act and its main aspects, as well as an overview of other AI legislative initiatives in the European Union and how these are influencing other jurisdictions, such as the UK, the US, and China.

The AI Act: The First AI Legislation. Other Jurisdictions Are Catching Up.

On June 14, 2023, the European Parliament achieved a significant milestone by approving the Artificial Intelligence Act (or AI Act), making it the world's first piece of legislation to regulate the use of artificial intelligence. This approval has initiated negotiations with the Council of the European Union, which will determine the final wording of the Act. The final version of the AI Act is expected to be published by the end of 2023. Following this, the Regulation is expected to be fully effective in 2026. A two-year grace period similar to the one contemplated by the GDPR is currently being considered. This grace period would enable companies to adapt gradually and prepare for the changes until the rules come into force.

As the pioneers in regulating AI, the European institutions are actively engaged in discussions that are likely to establish both de facto (essential for the expansion and growth of AI businesses, just like any other industries) and de jure (creating healthy competition among jurisdictions) standards worldwide. These discussions aim to shape the development and governance of artificial intelligence, setting an influential precedent for the global AI community.

Both the United States and China are making efforts to catch up. In October 2022, the US government unveiled its Blueprint for an AI Bill of Rights, centered around privacy standards and rigorous testing before AI systems become publicly available. In April 2022, China followed a similar path by presenting a draft of rules mandating chatbot-makers to comply with state censorship laws.

The UK government has unveiled an AI white paper to provide guidance on utilizing artificial intelligence in the UK. The objective is to encourage responsible innovation while upholding public confidence in this transformative technology.

While the passage of the Artificial Intelligence Act by the European Parliament represents an important step forward in regulating AI in Europe (and indirectly beyond, given the extraterritorial reach), the implementation of a global code of conduct on AI is also under development by the United Nations and is intended to play a crucial role in harmonizing global business practices concerning AI systems, ensuring their safe, ethical, and transparent use.

A Risk-Based Regulation

The European regulatory approach is based on assessing the risks associated with each use of artificial intelligence.

Complete bans are contemplated for intrusive and discriminatory uses that pose unacceptable risk to citizens' fundamental rights, their health, safety, or other matters of public interest. Examples of artificial intelligence applications considered to carry unacceptable risks include cognitive behavioral manipulation targeting specific categories of vulnerable people or groups, such as talking toys for children, and social scoring, which involves ranking people based on their behavior or characteristics. The approved draft regulation significantly expands the list of prohibitions on intrusive and discriminatory uses of AI. These prohibitions now include:

- "real-time" remote biometric identification systems in publicly accessible spaces;
- "post" remote biometric identification systems, with the sole exception of law enforcement prosecuting serious crimes, and only after judicial authorization;
- biometric categorization systems using sensitive characteristics (e.g., gender, race, ethnicity, citizenship status, religion, political orientation);
- predictive policing systems (based on profiling, location, or past criminal behavior);
- emotion recognition systems in law enforcement, border management, the workplace, and educational institutions;
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

In contrast, the uses that need to be regulated (as opposed to simply banned) through data governance, risk management assessment, technical documentation, and transparency criteria are the following:

High-risk AI systems are artificial intelligence systems that may adversely affect safety or fundamental rights. They are divided into two categories: (1) AI systems used in products falling under the EU's product safety legislation, and (2) AI systems falling into eight specific areas that will have to be registered in an EU database:

(i) biometric identification and categorization of natural persons;
(ii) management and operation of critical infrastructure;
(iii) education and vocational training;
(iv) employment, worker management and access to self-employment;
(v) access to and use of essential private and public services and benefits;
(vi) law enforcement;
(vii) migration management, asylum, and border control;
(viii) assistance in legal interpretation and enforcement of the law.

All high-risk artificial intelligence systems will be evaluated before being put on the market and throughout their life cycle.

The Generative and Basic AI systems/models can both be considered general-purpose AI because they are capable of performing different tasks and are not limited to a single task. The distinction between the two lies in the final output.

Generative AI, like the now-popular ChatGPT, uses neural networks to generate new text, images, videos or sounds that have never been seen or heard before, much as a human can. For this reason, the European Parliament has introduced higher transparency requirements for these models: disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of the copyrighted data used for training.

Basic AI models, in contrast, do not create, but learn from large amounts of data, use it to perform a wide range of tasks, and have application in a variety of domains. Providers of these models will need to assess and mitigate the possible risks associated with them (to health, safety, fundamental rights, the environment, democracy, and the rule of law) and register their models in the EU database before they are released to the market.

Next are the minimal- or low-risk AI applications, such as those used to date for translation, image recognition, or weather forecasting. Limited-risk artificial intelligence systems should meet minimum transparency requirements that enable users to make informed decisions. After interacting with applications, users can decide whether they wish to continue using them. Users should be informed when they are interacting with AI. This includes artificial intelligence systems that generate or manipulate image, audio, or video content (e.g., deepfakes).

Finally, exemptions are provided for research activities and AI components provided under open-source licenses.
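The risk-tier structure described above can be condensed into a short sketch. The mapping below uses only the example use cases mentioned in this article; the real classification turns on detailed annexes and legal tests, so this is a simplification, not legal guidance:

```python
# Simplified encoding of the AI Act's four risk tiers as described above.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "banned outright"
    HIGH = "assessed before market entry and throughout the life cycle"
    LIMITED = "minimum transparency duties (e.g., disclose AI involvement)"
    MINIMAL = "no specific obligations"

TIER_BY_USE_CASE = {
    "social scoring": RiskTier.PROHIBITED,
    "manipulative talking toys for children": RiskTier.PROHIBITED,
    "biometric identification": RiskTier.HIGH,
    "critical infrastructure operation": RiskTier.HIGH,
    "worker management": RiskTier.HIGH,
    "deepfake generation": RiskTier.LIMITED,
    "chatbot interaction": RiskTier.LIMITED,
    "machine translation": RiskTier.MINIMAL,
    "weather forecasting": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    tier = TIER_BY_USE_CASE.get(use_case, RiskTier.MINIMAL)  # default assumption
    return f"{use_case}: {tier.name} -> {tier.value}"

print(obligations("worker management"))
print(obligations("deepfake generation"))
```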

The European Union and the United States Aiming to Bridge the AI Legislative Gap

The United States is expected to closely follow Europe in developing its own legislation. In recent times, there has been a shift in focus from a light-touch approach to AI regulation towards emphasizing ethics and accountability in AI systems. This change is accompanied by increased investment in research and development to ensure the safe and ethical usage of AI technology. The Algorithmic Accountability Act, which aims to enhance the transparency and accountability of providers, is still in the proposal stage.

During the recent US-EU ministerial meeting of the Trade and Technology Council, the participants expressed a mutual intention to bridge the potential legislative gap on AI between Europe and the United States. These objectives gain significance given the final passage of the European AI Act. To achieve this goal, a voluntary code of conduct on AI is under development, and once completed, it will be presented as a joint transatlantic proposal to G7 leaders, encouraging companies to adopt it.

The United Kingdom's Pro-Innovation Approach in Regulating AI

On March 29, 2023, the UK government released a white paper outlining its approach to regulating artificial intelligence. The proposal aims to strike a balance between fostering a pro-innovation business environment and ensuring the development of trustworthy AI that addresses risks to individuals and society.

The regulatory framework is based on five core principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

These principles are initially intended to be non-statutory, meaning no new legislation will be introduced in the United Kingdom for now. Instead, existing sector-specific regulators like the ICO, FCA, CMA, and MHRA will be required to create their own guidelines for implementing these principles within their domains.

The principles and sector-specific guidance will be supplemented by voluntary AI assurance standards and toolkits to aid in the responsible adoption of AI.

Contrasting with the EU AI Act, the UK's approach is more flexible and perhaps more proportionate, relying on regulators in specific sectors to develop compliance approaches with central high-level objectives that can evolve as technology and risks change.

The UK government intends to adopt this framework quickly across relevant sectors and domains. UK sector-specific regulators have already received feedback on implementing the principles during a public consultation that ran until June 2023, and we anticipate further updates from each of them in the coming months.

The Difficult Balance between Regulation and Innovation

The ultimate goal of these legislative efforts is to find a delicate balance between the necessity to regulate the rapid development of technology, particularly regarding its impact on citizens' lives, and the imperative not to stifle innovation or burden smaller companies with overly strict laws.

Anticipating the level of success is challenging, if not impossible. Nevertheless, the scope for soft law, such as setting up an ad hoc committee at the European level, shows promise. Ultra-technical matters subject to rapid evolution require clear principles that stem from the value choices made by legislators. Moreover, such matters demand technical competence to understand what is being regulated at any given moment.

Organizations using AI across multiple jurisdictions will additionally face challenges in developing a consistent and sustainable global approach to AI governance and compliance due to the diverging regulatory standards. For instance, the UK approach may be seen as a baseline level of regulatory obligation with global relevance, while the EU approach may require higher compliance standards.

As exemplified by the recent Italian shutdown of ChatGPT (see ChatGPT: A GDPR-Ready Path Forward?), we have witnessed firsthand the complexities involved. The Italian data protection authority assumed a prominent role, and instead of contesting the suspension of the technology in court, the business chose to cooperate. As a result, the site was reopened to Italian users within approximately one month.

In line with Italy, various other data protection authorities are actively looking into ways to influence the development and design of AI systems. For instance, the Spanish AEPD has implemented audit guidance for data processing involving AI systems (more detail here), while the French CNIL has created a department dedicated to AI with open self-evaluation resources for AI businesses (more detail here). Additionally, the UK's Information Commissioner's Office (ICO) has developed an AI toolkit (available here) designed to provide practical support to organizations.

From Safety to Liability: The AI Act Is a Precursor to an AI-Specific Liability Regime

The EU AI Act is part of a three-pillar package proposed by the EU Commission to support AI in Europe. The other pillars include an amendment to the EU Product Liability Directive (PLD) and a new AI liability directive (AILD). While the AI Act focuses on safety and ex ante protection and prevention regarding fundamental rights, the PLD and AILD address damages caused by AI systems. Non-compliance with the AI Act's requirements could also trigger, depending on the AI Act risk level of the AI system at issue, different forms and degrees of alleviation of the burden of proof under both the amended PLD, for no-fault product liability claims, and the AILD, for any other (fault-based) claim. The amended PLD and the AILD are less imminent than the AI Act: they have not yet been approved by the EU Parliament and, as directives, will require implementation at the national level. Yet the fact that they are coming is of immediate importance, as it gives businesses even more reason to follow and possibly cooperate and partake in the standard-setting process currently in full swing.

Conclusion

Businesses using AI must navigate evolving regulatory frameworks and strike a balance between compliance and innovation. They should assess the potential impact of the regulatory framework on their operations and consider whether existing governance measures address the proposed principles. Prompt action is necessary, as regulators worldwide have already started publishing extensive guidance on AI regulation.

Monitoring these developments and assessing the use of AI is key for compliance and risk management. This approach is crucial not only for regulatory compliance but also to mitigate litigation risks with contractual parties and complaints from individuals. Collaboration with regulators, transparent communication, and global harmonization are vital for successful AI governance. Proactive adaptation is essential as regulations continue to develop.


Follow this link:
The EU Artificial Intelligence Act: What's the Impact? - JD Supra
