
AI And Generation Z: Pioneering A New Era Of Philanthropy – Forbes

Fabio Richter, founder of Laulau. During the UK's 2023 cost-of-living crisis, he launched the Hot Meal Challenge to provide hot meals for Londoners facing food poverty. Under the patronage of Lord Woolley of Woodford and in partnership with Sufra, this social fundraiser supported hundreds of families across London.

In an era marked by economic challenges, the resilience of human generosity is more evident than ever. The market for charities and non-profit organizations (NGOs) has seen incredible growth, reaching US$305.2 billion in 2023 from US$288.97 billion in 2022, and projections are even more promising, with estimates of the market reaching US$369.21 billion by 2027. Notably, Generation Z, born between 1997 and 2012, has contributed increasingly to this philanthropic surge: despite having fewer financial resources than older cohorts, studies show that Gen Z's charitable contributions are growing, reflecting their commitment to social and environmental causes. This growth, amid a weakened global economy, rising prices, and geopolitical volatility, underscores a collective commitment to positive impact.

Concurrently, AI adoption is surging in utility and popularity worldwide. The convergence of these two trends, an expanding NGO market and maturing AI technologies, holds immense potential as philanthropy proactively leverages AI's innovative capabilities. Experts in the field, such as James Hodson, CEO of the AI for Good Foundation, suggest that AI can revolutionize fundraising and operational efficiency in philanthropy, creating more impact per dollar donated, as evident in Lifeforce, a Humanitarian Aid 2.0 initiative.

While much has been discussed regarding AI's economic impact, with PwC estimating up to US$15.7 trillion added to the global economy by 2030, its potential influence on humanitarian action remains largely unexplored.

While AI's economic impact is well-documented, its potential in humanitarian sectors is just starting to be realized. Emerging technologies can drive further innovation in the philanthropic sector, benefiting charities through personalized donor outreach strategies, optimized resource allocation, and streamlined decision-making processes. For example, the American Red Cross has implemented AI algorithms to predict donation trends, enabling them to allocate resources more effectively during crises.


For instance, Save The Children Australia enhanced donor outreach with AI-powered predictions, combining data segmentation with its CRM system and ranking donor profiles to target specific donors effectively. Similarly, Greenpeace Australia Pacific leveraged machine learning to improve donor retention through a churn propensity model: by assigning scores based on previous donation histories, the charity identified donors to re-engage successfully. Furthermore, SwissFoundations highlights the unexplored potential of AI in donor matching, reporting, impact evaluation, and increasing transparency and accountability within philanthropic organizations. These case studies illustrate how AI can provide actionable insights into donor behavior, leading to more targeted and successful fundraising strategies.
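The churn propensity model that Greenpeace Australia Pacific describes can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example rather than the charity's actual pipeline: the feature names, the donors.csv file, and the "churned" label are all assumptions made for illustration.

```python
# Minimal sketch of a donor churn-propensity model (illustrative only).
# Assumes a hypothetical donors.csv exported from a CRM, with per-donor
# giving-history features and a "churned" label (1 = lapsed, 0 = active).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

donors = pd.read_csv("donors.csv")
features = ["gifts_last_12m", "months_since_last_gift", "avg_gift_amount", "tenure_years"]

X_train, X_test, y_train, y_test = train_test_split(
    donors[features], donors["churned"], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Score every donor with a churn probability, then surface the most
# at-risk supporters for a re-engagement campaign.
donors["churn_score"] = model.predict_proba(donors[features])[:, 1]
print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
print(donors.sort_values("churn_score", ascending=False).head(10))
```

Ranked scores like these are what allow a fundraising team to focus a limited outreach budget on the supporters most likely to lapse.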

Gen Z is uniquely positioned to advance charitable work. As the world's first generation of true digital natives, their fluency with technology and social media paves the way for innovative approaches to charitable giving, and, alongside AI, their largely untapped potential can drive further advancements in the sector.

The intersection of AI and Gen Z presents a unique opportunity to shape the future of philanthropy. As these two forces continue to converge, the possibilities for innovation and positive impact are boundless. Research indicates that Gen Z donors prefer digital platforms for charitable engagement and are more likely to support causes that align with their values, emphasizing the need for NGOs to adapt to these preferences. This generation is more than just digitally competent; they are socially conscious. According to a study by McKinsey, 70% of Gen Z prioritize social impact in their spending and charitable giving, indicating a shift towards more conscientious consumerism.

Dubious stereotypes often dismiss Gen Z as a generation of self-absorbed and distractible youth, seemingly trapped by the addictive allure of social media and limited in their ability to appreciate the world beyond their personal experiences. However, according to a Forbes article in 2022, members of Gen Z are defying these expectations and emerging as the next generation of charitable donors, potentially surpassing their older counterparts in their willingness to support philanthropic causes. Their motivations stem from a deep sense of conviction.

Gen Z distinguishes itself as a charitable demographic and takes the lead in its chosen advocacies, spearheading digitally driven efforts to address philanthropic causes. The Hot Meal Challenge is a prime example of how Gen Z's digital savviness can be harnessed for philanthropy: a viral fundraising campaign aimed at tackling the United Kingdom's pervasive cost-of-living crisis by providing hot meals to food-insecure households. Collaborating with Sufra, a prominent London-based food poverty charity, Gen Z members nominate each other via an app to donate hot meals.

Fabio Richter, the founder of the Hot Meal Challenge, firmly believes that philanthropy can drive meaningful global change, especially when harnessed with the power of technology. He states, "Through strategic giving and thoughtful investments, philanthropy can catalyze positive transformations in society. To fully unlock its potential, it is crucial to leverage technology, invest in local capacity-building, and collaborate with policymakers to enact long-term structural change."

Fabio emphasizes that Gen Z represents a highly promising donor market with which charities and non-governmental organizations (NGOs) should actively engage. "Surprisingly, non-profits often overlook generational cohorts like Gen Z and Millennials. While they may have less purchasing power compared to older generations, they outperform them in terms of annualized giving rates as a percentage of disposable income," he explains. "To effectively connect with younger generations, nonprofits must understand them from multiple dimensions: demographically, behaviorally, and psychographically." This viral campaign brought widespread attention to food insecurity issues in the UK, demonstrating the power of social media in driving social change.

Gen Z challenges the prevailing stereotypes by actively contributing to philanthropic endeavors. Their digital savvy and deep commitment to causes make them a force to be reckoned with. Nonprofits and organizations should recognize the untapped potential of this emerging market and develop comprehensive strategies to engage and collaborate with Gen Z effectively. Fabio Richter, the founder, stated, "The success of this initiative is a testament to Gen Z's commitment to social change, harnessed through technology." Richter's approach in the Hot Meal Challenge exemplifies how combining technology with a deep understanding of Gen Z's communication styles can lead to successful philanthropic campaigns.

While AI offers numerous benefits, it also presents new challenges in terms of ethics and privacy. Balancing these aspects is crucial for sustainable growth in philanthropy. As AI technology holds immense promise in revolutionizing humanitarian work, advocates and supporters must remain vigilant about the inherent risks associated with emerging technologies. Managing the potential systemic risks posed by training AI systems, addressing the challenges of predictive decision-making, and ensuring transparency are all paramount. Fabio, an expert in the field, highlights the importance of preventing AI systems from perpetuating and exacerbating structural biases inherent in data.

When deployed carefully and strategically, AI can be an extraordinary catalyst for transforming humanitarian efforts, regardless of their humble origins. Pioneering digital philanthropic initiatives such as the Hot Meal Challenge are already reshaping the industry landscape and paving the way for tangible real-world impact. Speaking at the initiative's launch, Lord Woolley of Woodford emphasized its transformative potential in fighting poverty and restoring human dignity.

It's vital for organizations to establish ethical guidelines for AI use, ensuring that these technologies are used responsibly and transparently, with a focus on enhancing rather than replacing human decision-making in philanthropy. Ethical considerations, like data privacy and bias in AI algorithms, are crucial. Measures such as transparent AI development processes and regular ethical audits are essential to ensure these tools serve the greater good without unintended consequences. It is critical to remember that integrating AI and philanthropy must be rooted in fundamental human values underpinning charitable endeavors: service, compassion, and a steadfast commitment to envisioning a better future. As Lord Woolley aptly stated, this endeavor is not just about individual gain but collective action and collaboration in serving others.

The collaboration between AI and philanthropy has the power to drive significant change. Still, it must always be guided by a deep understanding of human values and a shared vision for a brighter tomorrow.

The convergence of AI, Gen Z values, and philanthropy is a powerful combination with the potential to reshape humanitarian efforts.

As these complementary forces gain momentum, upholding ethical integrity and managing risks becomes paramount. By embracing these trends, NGOs can unlock new potential for impact and efficiency.

This approach promises technological advancement and a more compassionate and efficient philanthropic sector.

Moving forward, it is essential for philanthropic organizations to stay attuned to these technological advancements and generational shifts, ensuring that their strategies remain relevant and effective in an evolving landscape. The synergistic potential of this convergence remains largely untapped. With courage, care, and conviction, AI and Gen Z have the opportunity to shape the course of philanthropy and the lasting ascent of humanity.

Mr. Minevich is a highly regarded and trusted Digital Cognitive Strategist, Artificial Intelligence expert, Venture Capitalist, and the principal founder of Going Global Ventures. Mark collaborates with and advises large global enterprises in both the US and Japan (Hitachi), and is the official AI and Future of Work Advisor to the Boston Consulting Group. Currently, he serves as strategic advisor and global ambassador to the CEO and Chairman of New York-based IPsoft Inc.

Mark holds the role of senior fellow at the U.S. Council on Competitiveness in Washington, D.C., and maintains a position as senior adviser on Global Innovation and Technology to the United Nations Office for Project Services (UNOPS). He is an appointed member of the G20/B20's Digital Task Force, supplementing the group with expert knowledge on digitization, advanced autonomous systems, and the future of AI.

Mark is also the founder and Co-Chair of the World Artificial Intelligence Organization and AI Pioneers based in New York, and was appointed as the Global Digital Ambassador to the World Assessment Council in early 2020. He is the Strategic Advisor to SwissCognitive, an "independent" global AI think-tank in Switzerland that aims to share, connect, and unlock the fullest potential of Artificial Intelligence.

Mark also advises several venture capital groups. He acts as a Fund Adviser to Bootstrap Labs based in San Francisco: a pioneer in the realm of VC funds focused on applied AI, carrying with it a mature fund and portfolio of 24 applied AI companies. Mark is also an Advisor to the AI Capital Venture Fund based in Colorado, which is a dedicated venture and private equity fund geared towards AI companies in the late-seed to growth-stage maturity level.

Mark is also a trusted Adviser and Entrepreneur in Residence for Hanaco Ventures, a global venture fund that focuses on late-stage, pre-IPO Israeli and US companies powered by bold, visionary, and passionate minds. Prior to this position, Mark was the Vice Chair of Ventures and External Affairs, as well as CTO at the Comtrade Group, an international technology conglomerate. He also served as the CTO and Strategy Executive at IBM, and held other management, technology, and strategy roles that entailed formulating investment tactics for Venture Capital Incubation programs.

Mark is also involved in media and journalism, and contributes to a number of publications, such as Forbes.com. His knowledge has been cited and his name has been featured in articles on an international scale.

Forbes named Mark one of the Leaders to Watch in 2017. He has received the Albert Einstein Award for Outstanding Achievement and the World Trade Leadership Award from the World Trade Centers and World Trade Centers Association. Mark has served as a venture partner with GVA Capital in Silicon Valley, advising the fund on AI startups. He has also served as venture advisor to Global Emerging Markets, an alternative investment group that manages a diverse set of investment vehicles. Mark was also involved with Research Board, a Gartner company and international think-tank advising CIOs at some of the world's largest corporations, such as Deutsche Bank, BTM Corporation, Geotek Communications, Qwest Communications, Comcast, and USWEB/CKS.


Implementing quality management systems to close the AI … – Nature.com

In HCOs, AI/ML technologies are often initiated as siloed research or quality improvement initiatives. However, when these AI technologies demonstrate potential for implementation in patient care, development teams may encounter substantial challenges and backtracking to meet the rigorous quality and regulatory requirements12,13. Similarly, HCO governance and leadership may possess a strong foundation in scientific rigor and clinical studies; however, without targeted qualifications and training, they may find themselves unprepared to offer institutional support and regulatory oversight, or to mobilize teams toward the interdisciplinary scientific validation of AI/ML-enabled technologies required for regulatory submissions and deployment of SaMD. Consequently, the unpreparedness of HCOs exacerbates the translation gap between research activities and the practical implementation of clinical solutions14. The absence of a systematic approach to ensuring the effectiveness of practices and perpetuating them throughout the organization can lead to operational inefficiencies or harm. Thus, HCOs must first contend with a culture shift when faced with the quality control rigor inherent to industry-aligned software development and deployment: design controls, version control, and installation, operational, and performance qualification. This rigor focuses primarily on end-user acceptance testing, on the product meeting its intended purpose (improving clinical outcomes or processes compared to the standard of care or the current state), and on the traceability and auditability of proof records (Table 1).

Consider that even in cases where a regulatory submission is not within the scope, it remains imperative to adhere to practices encompassing ethical and quality principles. Examples of such principles identified by the Coalition for Health AI and the National Institute for Standards and Technology (NIST) include effectiveness, safety, fairness, equity, accountability, transparency, privacy, and security3,7,15,16,17,18,19,20. It is also feasible that the AI/ML technology could transition from a non-regulated state to a regulated one due to updated regulations or an expanded scope. In that case, a proactive approach to streamlining the conversion from a non-regulatory to a regulatory standard should address the delicate balance of meeting baseline requirements while maintaining a least-burdensome transition to regulatory compliance.

As utilized by the FDA for regulating SaMD, a proactive culture of quality recognizes the same practices familiar to research scientists well-versed in informatics, translational science, and AI/ML framework development. For example, the FDA has published good machine learning practices (GMLP)21 that enumerate its expectations across the entire AI/ML life cycle grounded in emerging AI/ML science. The FDA's regulatory framework allows for a stepwise product realization approach that HCOs can follow to augment this culture shift. This stepwise approach implements ethical and quality principles by design into the AI product lifecycle, fostering downstream compliance while allowing development teams to innovate and continuously improve and refine their products. Using this approach allows for freedom to iterate at early research stages. As the product evolves, the team is prepared for the next stage, where prospectively planned development, risk management, and industry-standard design controls are initiated. At this stage, the model becomes a product, incorporating all the software and functionality needed for the model to work as intended in its clinical setting. QMS procedures outline practices, and the records generated during this stage create the level of evidence expected by industry and regulators22,23. HCOs may either maintain dedicated quality teams responsible for conducting testing or employ alternative structures designed to carry out independent reviews and audits.

Upon deployment, QMS rigor increases again to account for standardized post-deployment monitoring and change management practices embedded in QMS procedures (Fig. 2). By increasing formal QMS consistency as the AI/ML gets closer to clinical deployment, the QMS can minimize disruption to current research practices and embolden HCO scientists with a clear pathway as they continue to prove their software safe, effective, and ethical for clinical deployment.

Fig. 2: Staged process for applying increasing regulatory rigor throughout product realization.
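The staged approach described above can be pictured as a set of gates, where each lifecycle stage requires certain QMS records before the product advances. The sketch below is purely illustrative: the stage names and record lists are assumptions, not the authors' or the FDA's actual checklist, and a real QMS would derive its gates from its own procedures and the applicable regulations.

```python
# Illustrative stage-gate check for AI/ML product realization (not an
# official QMS). Stage names and required records are assumed for the example.
REQUIRED_RECORDS = {
    "research": ["model_card", "data_provenance_log"],
    "development": ["design_controls", "risk_management_file", "version_control_log"],
    "validation": ["installation_qualification", "operational_qualification",
                   "performance_qualification", "user_acceptance_report"],
    "deployment": ["post_market_monitoring_plan", "change_management_procedure"],
}

def gate_check(stage: str, records_on_file: set) -> list:
    """Return the records still missing before the product can pass this stage's gate."""
    return [r for r in REQUIRED_RECORDS[stage] if r not in records_on_file]

# Example: a team preparing to move from development to validation.
missing = gate_check("development", {"design_controls", "version_control_log"})
if missing:
    print("Cannot advance; missing records:", missing)  # -> ['risk_management_file']
```

The point of such a gate is traceability: each record is auditable proof that the rigor appropriate to that stage was actually applied.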


How Popular Are Generative AI Apps? – Government Technology

The Internet has been abuzz with stories surrounding the future of OpenAI (the company that created ChatGPT), CEO Sam Altman's future (as of this writing, he had been reinstated to the position), Microsoft's hiring plans and related stories.

But regardless of how those important questions get answered, it is important to understand just how big this generative artificial intelligence (GenAI) market has become as we head into 2024.

The Verge recently highlighted the growth of ChatGPT: "In less than a year, it's hit 100 million weekly users, and over 2 million developers are currently building on the company's API, including the majority of Fortune 500 companies."

A newly released study by Writerbuddy examined this question in detail. Here's what they found about the top 10 apps. (Note that the full report looks at 50 apps, but I am just listing the top 10.)

1. ChatGPT:Launched in November 2022, it quickly dominated with 14.6 billion visits over 10 months, averaging 1.5 billion monthly.

2. Character.AI: Introduced in September 2022, it captivated users, accumulating 3.8 billion visits and surging by 463.4 million within a year.

4. Janitor AI: A unique chatbot from May 2023, it experienced a quick rise with 192.4 million visits in four months.

5. Perplexity AI: Established by ex-Google staff in August 2022, it progressed rapidly, drawing 134.3 million users in nine months.

6. Civitai: An AI art hub since November 2022, it climbed to 177.2 million visits within 10 months.

7. Leonardo AI: From December 2022, this visual asset tool became a creator's choice, gathering 101.6 million visits over nine months.

8. ElevenLabs: With advanced voice AI since October 2022, it attracted 88.6 million users in 11 months.

9. CapCut: An established video tool from April 2020, it consistently pulled 203.8 million visits in the past year.

10. Cutout.Pro: An AI content tool from 2018, it maintained its grip with 133.5 million users over the year.

1. Craiyon: Between September 2022 and August 2023, Craiyon.com, an AI image generator, lost 15 million visits, possibly due to growing competition. They averaged 10.7 million monthly, with a monthly drop of 1.4 million.

2. Midjourney: Another AI image tool, Midjourney, launched in July 2022, faced an 8.66 million visitor dip over the year. Despite a strong 41.7 million monthly average, they saw a monthly decline of 787,700.

3. QuillBot: An established AI writing tool, QuillBot lost 5 million visits in 12 months, perhaps due to rising chatbot rivals. Still, they held a robust 94.6 million monthly traffic, with a 461,400 monthly drop.

4. Jasper: An early AI writing platform, Jasper's traffic decreased by 1.27 million visits over the year, possibly affected by giants like ChatGPT. They averaged 7.9 million monthly and faced a 115,100 monthly loss.

5. Zyro: An AI-enhanced website builder, Zyro saw a 1.09 million-visitor decline in 12 months, potentially due to growing AI feature competitors. They recorded a 5 million monthly average and a decline of 99,400 each month.
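The "monthly average" and "monthly drop" figures quoted above are simple descriptive statistics over monthly traffic counts. The sketch below shows one way such figures could be derived; the visit numbers are made up for illustration and are not Writerbuddy's data.

```python
# Illustrative arithmetic behind a "monthly average" and "monthly drop" figure.
# The visit counts below (in millions) are invented, not the report's data.
monthly_visits = [12.5, 12.1, 11.8, 11.3, 10.9, 10.4, 10.1, 9.8, 9.4, 9.1, 8.8, 8.4]

n = len(monthly_visits)
average = sum(monthly_visits) / n

# Least-squares slope: average change in visits per month over the period.
xs = range(n)
x_mean = sum(xs) / n
slope = sum((x - x_mean) * (y - average) for x, y in zip(xs, monthly_visits)) \
        / sum((x - x_mean) ** 2 for x in xs)

print(f"Monthly average: {average:.1f}M visits")
print(f"Monthly trend:   {slope:+.2f}M visits per month")  # negative means declining
```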

A video accompanying the original article shows some of the ways that GenAI tools are changing enterprises' business plans.


Fovia Ai to Showcase Optimized AI Visualization at IAIP Exhibit … – PR Newswire

Fovia Ai Provides Efficient AI Results in One Unified Viewer

CHICAGO, Nov. 26, 2023 /PRNewswire/ -- Fovia Ai, Inc., a subsidiary of Fovia, Inc., a world leader in advanced visualization for over two decades and a preeminent provider of zero-footprint, cloud-based imaging SDKs, today announced that it will be showcasing AI interaction and visualization of AI results from multiple vendors and algorithms displayed efficiently in one consistent user interface, in the Imaging Artificial Intelligence in Practice (IAIP) demonstration November 26-29 at the 109th Scientific Assembly and Annual Meeting of the Radiological Society of North America (RSNA 2023) at McCormick Place in Chicago.

RSNA attendees visiting the IAIP exhibit will be able to explore AI integrations from 20 exhibitors with 28 products that are based on real-world clinical scenarios as well as see live demonstrations of Fovia Ai software integrated with vendors including GE Healthcare, Hyperfine, Laurel Bridge, Milvue, Nuance, Nvidia, PaxeraHealth, Qure.ai, Qvera, Siemens Healthineers, Smart Reporting, Telerad Tech and Visage Imaging. The interactive exhibit provides attendees access to emerging AI technologies, demonstrates the interoperability standards needed to integrate AI into the workflow of diagnostic radiology, and highlights AI-driven products that remove barriers to clinical adoption.

"For the fourth year in a row, Fovia Ai is thrilled to participate in the IAIP demo and to provide optimized AI visualization for various industry partners. Acting as a gatekeeper for AI findings, our technology works conjointly with the latest interoperability standards to make physician-validated findings readily available to AI orchestrators, PACS, and reporting systems. At this year's RSNA we are also demonstrating the efficiency of visualizing and interacting with AI results in one consistent user interface regardless of the algorithm origin, thereby streamlining AI-supported image interpretation," stated Fovia Ai's Chief Technology Officer, Kevin Kreeger, Ph.D. "We are pleased with the incredible forward-thinking environment the collaborative IAIP team provides every year."

Attendees visiting the Fovia Ai booth (#4161), conveniently located adjacent to the IAIP demonstration, can:

To learn more about Fovia and Fovia Ai's complete product suites or arrange a demonstration at the 109th Scientific Assembly and Annual Meeting of the Radiological Society of North America, November 26-29, contact us.

About Fovia Ai

Fovia Ai, Inc. is a subsidiary of Fovia, Inc., a world leader in advanced visualization, a preeminent provider of cloud-based, zero-footprint imaging SDKs, and the developer of High Definition Volume Rendering, XStream HDVR, F.A.S.T. RapidPrint and TruRender. Fovia Ai's flagship products, F.A.S.T. aiCockpit and F.A.S.T. AI SDK, enable radiologists and clinicians to efficiently access AI results directly within their existing workflows from any PACS, worklist, dictation software or hospital system. Complementary products in Fovia Ai's product suite include F.A.S.T. AI Annotation, F.A.S.T. AI Validation, F.A.S.T. AI Workflows, F.A.S.T. Interactive AI and F.A.S.T. Interactive Segmentation, collectively providing tools to annotate, validate, modify, accept/reject, interact with and segment data. The flexible architecture of Fovia Ai's product suite and Fovia's 20+ years of radiology integration experience facilitate seamless integrations with a variety of partners, platforms, processors and operating systems.

For additional information and to learn more about commercial, academic or research licensing, visit fovia.ai or fovia.com.

IMPORTANT REGULATORY NOTICE: The applications mentioned herein are for investigational use only at this time.

SOURCE Fovia Ai, Inc.


Putin: Russia will develop new AI technology to counter West – Business Insider

Russian President Vladimir Putin unveiled Russia's new plan to take on the West in developing AI. Vladimir Smirnov, Sputnik, Kremlin Pool Photo via AP

Russia is staking its claim in the AI arms race.

At the Artificial Intelligence Journey conference in Moscow on Friday, Russian President Vladimir Putin announced plans for a new national strategy for AI development to counter Western influence over the powerful technology.

"Our domestic models of artificial intelligence must reflect the entire wealth and diversity of world culture, the heritage, knowledge, and wisdom of all civilizations," Putin said.

Putin, who outlined the goals of the new strategy in broad terms, said that Russia will intensify its research into generative AI and large language models.

To achieve that, the country would scale up its supercomputing power and improve top-level AI education. Russia would also work to change laws and boost international cooperation to achieve its goals, Putin said.

Putin lamented that existing generative AI models work in "selective" or "biased" ways, potentially ignoring Russian culture, because they're often trained to solve tasks using English-only data sets, or data sets that are "convenient" or "favorable" to the developers of these models.

As a result, an algorithm can "tell the machine that Russia, our culture, science, music, literature simply does not exist," he said, leading to "a kind of abolition in the digital space."

English-speaking countries right now dominate AI research. The United States and the United Kingdom claim the top spots in a ranking of the highest number of significant machine learning systems, according to Stanford's Institute for Human-Centered Artificial Intelligence (HAI).

The United States has 16 significant systems. The United Kingdom has eight. Russia has just one, by HAI's account. Similarly, close to 300 authors of these systems come from the United States. Another 140 are from the United Kingdom. Only three come from Russia.

Concerns about the potential dangers AI could pose to humanity have divided even the most dedicated AI researchers, and some have publicly said the technology in the wrong hands would be problematic.

Geoffrey Hinton, the British-Canadian AI researcher named a "godfather of AI," for instance, has said he's worried about "bad actors" like Putin using the AI tools he's creating.

Putin, however, says that the Western monopoly over AI is "unacceptable, dangerous and inadmissible."


Capital Law: Decentralization must be associated with supervision … – VietNamNet

On November 10, the National Assembly listened to the presentation by Minister of Justice Le Thanh Long about the amendment to the 2012 Capital Law.

The draft amended law includes seven chapters with 59 articles (3 chapters and 32 articles more than the 2012 law) and mentions nine major policies, including the restructuring of the capital city's apparatus to make it operate in a streamlined, professional, and effective way.

Deputy director of the Hanoi Department of Justice Nguyen Cong Anh talked to VietNamNet about the draft law.

What is your assessment of the impact of the Capital Law's enforcement on Hanoi?

After 10 years of enforcement, the 2012 Capital Law has had a positive impact on the city's socio-economic development. The law has allowed the city to establish legal tools for building and managing capital city planning, thereby contributing to a more orderly and neat urban landscape. The city has a higher authority to set penalties on violations of construction and land laws.

Hanoi has gained achievements in developing culture, and the cultural and spiritual life of the city's residents has been improved, while cultural and historical heritage sites have been better preserved and embellished.

Hanoi continues to lead the country in developing education and training, and improving the quality of human resources. The State's social security policies are implemented properly, sufficiently and promptly; Hanoi has issued specific and unique policies on social welfare; and ensured political stability, national defense, security and social order.

The city's growth rate has been increasing year after year, actively contributing to the country's GDP growth. Hanoi's GRDP (gross regional domestic product) in 2020 reached VND1.02 quadrillion, while GRDP per capita was $5,325, or 1.92 times higher than the country's average level.

In 2016-2020, Hanoi made up more than 16 percent of GDP, 18.5 percent of state budget collections, 20 percent of revenue from domestic sources, and 8.6 percent of the country's total export turnover.

Which problems do you think need to be solved when compiling the amended Capital Law?

In my opinion, many provisions of the 2012 Capital Law are general, not clear enough, and difficult to apply. After the 2012 Capital Law took effect, many other laws were enacted, including the Land Law, Housing Law, Residence Law and others, which contain provisions overlapping with the Capital Law.

The relocation and post-relocation land fund management have not been strictly implemented according to the Prime Minister's Decision 130/QD-TTg dated January 23, 2015. The application of standards and regulations in planning a number of public works in traffic, construction, water supply and drainage in the inner city remains unsystematic.

Environmental regulations on wastewater, emissions and noise in the capital have been issued, but the application is ineffective. Policies on science and technology development have been mentioned in the law, but they have not been effectively implemented.

There are many issues not mentioned in the 2012 law, including specificity in the organizational structure of the capital's government, and agricultural and rural development policies. The law doesn't contain provisions about a management mechanism to coordinate cooperation among cities/provinces within the capital area.

The Steering Committee for Planning and Investment in the Hanoi Capital Region has been established but it operates ineffectively.

Regarding transportation, the current land bank reserved for transport development is still too small compared to requirements. Meanwhile, the difficulties in expanding transport routes in the inner city area, plus the high costs of compensation for site clearance to expand roads, have put pressure on the city's budget.

Which policies do you think the amended Capital Law needs to create a specific and outstanding mechanism for Hanoi's development?

I believe that one of the extremely important issues that needs changes in the amended law is decentralization.

The new law needs to concretize specific mechanisms and policies for prompt application. Regarding legislative authorization, it is necessary to decentralize this work to competent agencies in accordance with current laws to ensure feasibility and uniformity. While applying decentralization, it is necessary to clarify the responsibilities Hanoi authorities have to take, as well as the mechanism to inspect and supervise law enforcement.

Hanoi needs to be given specific mechanisms and policies to optimize the capital citys resources, and pay attention to attracting talents.

Quang Phong


Wikipedia founder Jimmy Wales says AI is a mess now but can … – Euronews

Wikipedia's founder Jimmy Wales tells Euronews Next about the terrible early stage of ChatGPT, the lesson for OpenAI, and his open-source social media platform.

ChatGPT, the wildly popular generative artificial intelligence (AI) tool from OpenAI, is currently a mess when it's used to write articles on Wikipedia, the platform's founder Jimmy Wales tells Euronews Next.

A Wikipedia article written today with ChatGPT-4 is terrible and doesn't work at all, he says, because it really misses out on a lot and it gets things wrong, and it gets things wrong in a plausible way and it makes up sources and it's a mess.

He even goes as far as to predict that superhuman AI could take at least 50 years to achieve.

But while he believes it is possible for AI to surpass humans in the distant future, it is more likely that AI tools will continue to support intellectual activities, despite being in an early phase at the moment, he says.

The most valuable start-up in the United States, OpenAI catapulted onto the scene with its chatbot ChatGPT last year.

The technology takes instruction and questions and answers them with an eerily human-like response based on sources it gathers online. It can be used for writing essays, song lyrics, or even health advice, though it can often get the information wrong, known as hallucinating.

But even the most powerful chatbot AI start-up was thrown into chaos with the ousting of its CEO and co-founder Sam Altman last week and then his rehiring just days later after employees threatened the board they would quit en masse.

Wales said it is worrisome that this occurred for such an influential company but that it will probably pass as if nothing happened.

If anything, he said the company will likely get its house in order and that it is a good lesson to start-ups of all kinds that you really do have to think even at a very early stage about governance, about the stability of decision making.

Despite his criticism of current generative AI models, Wales has not ruled out AI being used for Wikipedia.

He said if a tool were built to find mistakes in a Wikipedia article by comparing it to the sources it uses, it could help to iron out inaccuracies.
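Wales's idea of a tool that checks an article against the sources it cites can be sketched roughly. The snippet below is a hypothetical illustration, not an actual Wikipedia or Wikimedia tool: it uses plain TF-IDF similarity to flag article sentences that have no closely matching sentence in the cited source, which a human editor could then review. The example sentences and the 0.3 threshold are assumptions.

```python
# Rough sketch: flag article sentences with weak lexical support in a cited source.
# Illustrative only; a real tool would need citation parsing, entailment models,
# and human review rather than raw TF-IDF similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

article_sentences = [
    "The bridge opened to traffic in 1932.",
    "It is the longest steel arch bridge in the southern hemisphere.",
]
source_sentences = [
    "Construction finished in early 1932 and the bridge opened in March that year.",
    "At the time of opening it was the widest long-span bridge in the world.",
]

vectorizer = TfidfVectorizer().fit(article_sentences + source_sentences)
similarities = cosine_similarity(
    vectorizer.transform(article_sentences),
    vectorizer.transform(source_sentences),
)

for sentence, row in zip(article_sentences, similarities):
    best_match = row.max()          # strongest support found anywhere in the source
    if best_match < 0.3:            # arbitrary threshold for "needs a human look"
        print(f"Weak support ({best_match:.2f}): {sentence}")
```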

He even told Euronews Next that he would consider a Wikipedia venture with an open-source AI company that is freely usable to match Wikipedia's principles, but clarified there is nothing specific in the works.

However, he says that this would be a decision that would not be taken lightly.

Most businesses, not just charities like us, would say you have to be really, really careful if you're going to put at the heart of your business a technology that's controlled by someone else because if they go in a different direction, your whole business may be at risk, he said.

He would therefore think carefully about any partnerships but added that he was open to pilot programmes and testing models.

Wikipedia is still essential to generative AI as it sources information published online to produce content. Therefore, the online encyclopedia must be accurate and not produce bias, something that both AI and Wikipedia have been accused of.

To create a gender balance and combat disinformation, Wikipedia has its own army of Wikipedians who are mostly male volunteer editors. Wales said that Wikipedians can see the difference between fake websites and can easily tell if the text was written by a human.

But bias is much harder to tackle as it can be historical; for instance, there were fewer female scientists in the 19th century and not much was written about them at the time, meaning Wikipedians cannot write that much about them. Or it can be unconscious bias, whereby a 28-year-old tech nerd Wikipedian may have different interests compared to a 55-year-old mother.

Diversity is key in trying to combat bias, something the company is striving to achieve.

It's a real problem and obviously we feel a heavy responsibility to the extent that the world depends on Wikipedia and AI models depend on Wikipedia, he said.

We don't want to teach robots to be biased, so we want to get it right as at the sort of the human heart of the whole thing.

Disinformation and online hate have been a grievance for Wales, and one that has led to blows with the X (formerly Twitter) boss Elon Musk, who offered $1 billion (€915 million) for Wikipedia to change its name to Dickipedia.

Wales has never responded to Musk's comment, as he said it did not need an answer. "Everybody looks at that and says, 'Are you 12 years old, Elon?'" he told Euronews Next.

The $1 billion offer came after Wales criticised Musk for laying off moderation staff at X, which the Wikipedia chief said had increased all kinds of serious racism and toxic behaviour on the platform and is likely to affect advertising revenue.

You can't both run a toxic platform and expect advertisers to give you money, so that might change things, Wales said, adding that he and Musk are friendly and do text and that the exchanges are pleasant.

He said he still uses X but has deleted the app from his phone, which has made his life much better as he can do other things that are less toxic.

Wales has launched his own social network platform which he says has a completely different approach to X.

Last week at Web Summit, Wales announced the beta version of his project called Trust Cafe, a new online community he says will give power to its most trusted members.

First revealed in September, he describes it as his experiment in a friendly and open-source social media platform that he is not taking too seriously as a business venture.

He called it a cross between X and Reddit, where you can discuss certain topics but are not limited to a certain number of characters and there is not one sole owner of a discussion.

Reddit is both fantastic and horrible. Whereas we're really pursuing a model that's much more... the governance is across everything, said Wales.

While he admits online hate and toxic behaviour will always occur within some users, he is optimistic.

If you've got basically sensible people who have enough power, you'll get a basically sensible platform and there's always going to be somebody crazy. There's always going to be some debate that turns a little ugly. That's just human nature, said Wales.

But as long as you can keep the main thrust of it in a healthy channel, then you can have like a really interesting kind of open platform where people can really genuinely engage with ideas.


Nicolas Cage on Memes, Myths, and Why He Thinks AI Is a … – WIRED

Nicolas Cage knows he's a meme. He's not happy about it. After making the mistake of googling himself a few years back, the charismatic actor discovered that his big on-screen performances had been translated into single-frame quips and supercuts, taken (like all memes, really) out of context, played for lolz, and in a manner that, frankly, makes Cage seem like a graduate from the Jim Carrey school of rubber-faced acting.

Something like Nick Cage loses his shit, where they cherry-pick meltdowns from different movies I'd made over the years, he says. I get that it's all done for laughs, and in that context it is funny, but at the same time, there's no regard to how the character got there. There's no Act One, there's no Act Two.

This, Cage says, is not why he got into making movies. Back in the 1980s, when he was showing up in Fast Times at Ridgemont High and as the romantic lead in Valley Girl, there was no internet, no one turning him into a TikTok template. So, as I've watched these memes grow exponentially and get turned into T-shirts and You don't say? and all that stuff, Cage says, I've just thought, Wow, I don't know how I should feel about this, because it's made me kind of frustrated and confused.

That's part of the reason Cage signed on to do his latest movie, the A24 drama Dream Scenario, in which he plays Paul Matthews, a downtrodden university professor who suddenly starts to appear in the dreams of millions of people around the world. Directed by Sick of Myself's Kristoffer Borgli, the film is a clever look at the trappings of instantaneous fame and at what it looks like when someone's fame becomes bigger than they might be themselves, something Cage, who actually changed his name and leaned into a more bombastic persona early in his career, knows a little something about.

To mark Dream Scenario's release, WIRED talked to Cage about where he's at with his meme-ification these days, his dislike of social media, and why he's going to make damn sure that no one can make an AI-generated Nick Cage after he shuffles off this mortal coil.

WIRED: Over the course of the movie, Paul struggles with who he thinks he is and who the world thinks he is, and how that's constantly shifting around him. Is that something you've had to deal with over the course of your career in terms of Nick Cage, Hollywood actor versus Nicolas Coppola, father and human being?


The Sam Altman saga reveals the need for AI transparency – New York Post

Opinion

By Alex Tapscott

Published Nov. 25, 2023, 12:00 p.m. ET

Artificial Intelligence titan OpenAI fired and then rehired its charismatic CEO Sam Altman this past week. REUTERS

It's been a roller-coaster week for Sam Altman, the past, now present and, hopefully, future CEO of artificial-intelligence giant OpenAI.

Last weekend, the company's board shocked the technology world by firing Altman for no apparent reason.

The move left Microsoft, OpenAI's largest investor, reeling.

After failing to have him reinstated, Microsoft CEO Satya Nadella announced that Altman and his co-founder Greg Brockman were jumping ship to lead Microsoft's new AI research arm.

Next up was a near company-wide revolt, as most of OpenAI's 800 employees made clear they wanted Altman back or were ready to follow him to Microsoft.

And so by midweek Altman had been reinstated at OpenAI, accompanied by a new board of directors, which includes former Harvard President Lawrence Summers.

The entire affair has been stealthy, both in speed and possible subterfuge.

What were the real reasons for dismissing Altman, a hugely capable leader who, among other things, was spearheading a funding round valuing OpenAI at $87 billion?

That's probably a question for ChatGPT.

OpenAI began life as a non-profit tasked with advancing responsible AI research but has more recently morphed into a typical high-growth tech company.

Some on the board, including the company's Chief Scientist and an AI ethicist, worried that Altman was breaking away from the company's founding principles of altruism.

They feared Altman's bottom-line focus and new AI products reaching near-sentient status could put humanity at risk.

The Altman-OpenAI saga has left many industry observers with a Silicon Valley-style case of whiplash.

There's also a fair measure of uncertainty around this next-gen OpenAI, both in terms of its ongoing stability and its approach to the future growth of AI as a whole.

Will this week's backroom machinations further edify existing tech giants, like Microsoft?

Or will fast-running start-ups like OpenAI remain as the steward of AI's future?

Will governments throttle AI's growth through onerous new rules?

Or will so-called doomer AI skeptics turn the public against AI before it even gets fully going?

The truth is that none of these choices address AI's biggest concern, the murkiness over how to train, build, and ship new AI products responsibly.

And fixing this begins with doubling down on openness and transparency.

Indeed, Microsoft's Nadella called the naming of a new OpenAI board a key first step toward well-informed and effective governance.

For AI to reach its potential safely at scale, we need transparency improvements at every step.

We need to decentralize AI's existing framework so that it's governed by many rather than a few.

Embracing decentralized decision-making reduces any single point of failure, such as a disgruntled board, a charismatic CEO or an authoritarian regime.

As Walter Isaacson wrote, Innovation occurs when ripe seeds fall on fertile ground.

In other words, the AI technology stack is fertile; to cultivate it, we must plant new and more inclusive ideas.

Lets start at the bottom of that stack, with hardware.

Today, three companies, Amazon Web Services, Microsoft, and Google, control three-fourths of the cloud computing market that stores all that AI data.

One company, NVIDIA, manufactures most of the chips.

Decentralization would allow smaller, user-owned networks to offset this hegemony, while adding much-needed capacity to the industry.

Altman was in the Middle East raising money for a new hardware venture that would rival NVIDIA when he was fired.

To dislodge the big players entirely, he should embrace a decentralized model instead.

Next up are so-called foundation models, the AI brains that generate language, make art and write code (and lame jokes).

Companies guard these models with little oversight or transparency.

OpenAI's models, for instance, are closed tight to public scrutiny. User-owned networks with multi-stakeholder input would be better than Microsoft or OpenAI having complete foundational control, which is where we are headed.

Equally important is actual data.

To train an AI foundation model, we need lots of data.

Companies like Microsoft and Amazon have grown rich and powerful amassing mountains of user data; that's one reason OpenAI partnered with Microsoft to begin with.

Yet users don't know how these AI firms are exploiting their personal data to train their models.

Decentralized data marketplaces such as Ocean Protocol allow individuals and organizations to securely (and accurately) share their data with AI developers.

The data silos of tech giants become less important.

Finally, at the top of the stack are applications.

Imagine a chatbot for K-12 students that acts as their personal tutor, fitness instructor and guidance counselor.

We want transparency from AI products that talk to our children and everyone else.

We also want some say in what these apps collect and store about us, how they use and monetize this information, and when they destroy it.

OpenAI currently offers little, if any, of this.

AI could alter humanity's fate profoundly. But so far, just a select few, Altman and Nadella among them, are determining its future behind closed doors.

They claim to represent the interests of all of humanity, but no one really knows.

Neither do we know why OpenAI initially sent Altman packing last week.

But a lack of consistent candidness, a.k.a. transparency, was cited by his detractors.

Back where it all began, Altman will likely emerge stronger than ever.

Now he must use that strength to advance the core openness OpenAI has always claimed to hold dear.

Alex Tapscott is the author of Web3: Charting the Internet's Next Economic and Cultural Frontier (Harper Collins, out now) and a portfolio manager at Ninepoint Partners.


What the changes to Alberta’s healthcare delivery system mean for … – Canada Immigration News

Earlier this month, Alberta Premier Danielle Smith announced that the existing Alberta Health Services (AHS) system in the province will be divided into four separate entities in an effort to improve access to care.

In a press conference at the beginning of November, Smith reiterated that this plan would organize Alberta's healthcare delivery system by function, aiming to reduce emergency room and surgery wait times, improve access to innovative treatments and recruit more staff.

More: Alberta's plan for a new healthcare delivery system


Specifically, the decentralization of healthcare in the province will see organizations deliver health services in primary care, acute care, continuing care and mental health/addiction care.

Permanent resident (PR) landing data from 2022 shows that nearly 50,000 new PRs (49,460) landed in Alberta last year.

As Alberta was the fourth-largest immigrant destination (by number of PRs) in all of Canada for 2022, the anticipated changes to its healthcare delivery would impact a significant number of Canadian newcomers. In addition, this impact will likely be felt in a variety of ways, from healthcare access (as intended by the reform) to employment prospects.

Employment

As stated by Alberta's Premier, one of the goals related to this reform is the recruitment of additional staff in the healthcare industry. This will likely mean that prospective newcomers to Canada will have more opportunities to find employment in this province.

Evidence of this potential can be found in many places, including the recent introduction of category-based Express Entry draws.

These draws, which deviate from standard Express Entry draws' typical focus on a candidate's Comprehensive Ranking System (CRS) score, instead prioritize the selection of immigration candidates based on selected proficiencies and past work experience.

In 2023, Canada's immigration department selected the following six categories as areas of focus for category-based draws. This year's categories prioritize immigration candidates with:

or work experience in one of the following five industries:

As evidenced by the inclusion of healthcare occupations in the 2023 list of categories designated by Immigration, Refugees and Citizenship Canada (IRCC), Canada needs more skilled workers in this industry. Therefore, with the reforms coming to Alberta's healthcare system, prospective newcomers could see a heightened level of opportunity to immigrate to, work and live in the province.

Alberta, beyond its position as last year's fourth most popular immigrant destination, is also a province experiencing a general boom in population growth over recent years.

In fact, according to Statistics Canada data referenced in a CTV story from September, Alberta's population on July 1 this year was 4.7 million, 4.1 percent higher than [July 1] last year due to an increase of 184,400 people.

The same story also noted that Alberta's population has recently boomed more than others, [outpacing] the national average by 1.1 percentage points.

As a result of this population growth, most of which can be attributed to international migration*, Alberta has seen a growth in its labour force at its fastest annual pace since 2007 excluding the COVID recovery period.

*International migration has accounted for 61% of the total provincial population increase (112,562 people) in Alberta

In the words of ATB Financial Vice President and Chief Economist Mark Parsons, migration [to Alberta from other parts of the world provides] a steady source of people to fill some of these jobs.

Note: CTV's story notes that the one other primary driving factor in Alberta's population boom is migration from other parts of Canada (mostly Ontario and British Columbia), which accounts for roughly 31% (56,245 people) of the province's population growth.

For foreign nationals considering Alberta as their destination, the province has pathways available to all types of newcomers, including temporary foreign workers and entrepreneurs.

As an example, Alberta's Provincial Nominee Program (PNP) is called the Alberta Advantage Immigration Program (AAIP).

This provincial immigration system includes a variety of streams for workers and entrepreneurs, each of which has different requirements with respect to such things as, for example, the need for an existing job offer prior to applying.

More: This dedicated webpage can provide readers with more information about the AAIP

The following is a sample list of AAIP streams:

For workers

For entrepreneurs

Click here for more information about landing, settling and life in Alberta, including information about working across the province and healthcare among a variety of other topics.

