<?xml version="1.0" encoding="utf-8"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
		<atom:link href="http://asianheritagesociety.org/blog/x5feed.php" rel="self" type="application/rss+xml" />
		<title><![CDATA[Asian Heritage Society, San Diego – AI News]]></title>
		<link>http://asianheritagesociety.org/blog/</link>
		<description><![CDATA[Stay informed with the latest updates on Asian heritage events and initiatives in San Diego. Discover cultural celebrations, community activities, and insightful news articles covering the rich tapestry of Asian heritage in the region. Explore diverse perspectives and stay connected with the vibrant Asian community in San Diego through our curated AI news platform.]]></description>
		<language>en-us</language>
		<lastBuildDate>Sun, 09 Jun 2024 00:53:00 -0400</lastBuildDate>
		<generator>Incomedia WebSite X5 Evo</generator>
		<item>
			<title><![CDATA[AI Will Be Crucial for Today’s Students, But Our Public Schools May Be Ambivalent]]></title>
			<author><![CDATA[Leonard Novarro and Rosalynn Carmen]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000083"><div>“Learning how AI works and understanding its implications for our lives is at least as important as learning to read and write.” — <em>Artificial Intelligence: 101 Things You Must Know Today About Our Future,</em> by Lasse Rouhiainen</div><div>U.S. Highway 101, which runs through Palo Alto, is both a highway and a metaphor for the economic and social division in this heart of Silicon Valley. </div><div>While the west side is the birthplace of companies like Apple, Google and Facebook and home to distinguished educational institutions such as Stanford University, the east side of the freeway harbors one of the poorest and most educationally disadvantaged communities in the United States. And while public schools on the west side of US 101 have the resources to boost high test scores and opportunities for students, the east side faces a constant shortage of money and resources and a persistent gap in academic achievement.</div><div><br></div><div>“When you peel back the onion, you can see that our schools have been failing us for decades and have been losing ground every year,” says Peter Sibley, CEO of <span class="imUl cf2">Journeys Map</span>, a San Diego firm that matches student aptitude with potential careers. “The goal line is definitely moving,” added Sibley, somewhat critical of the overall acceptance of artificial intelligence in many public schools.</div><div><br></div><div>While private education seems to be working hard to integrate this relatively new technological tool, public schools, even in San Diego, considered a leader in artificial intelligence, may be lagging. 
<span class="imUl cf2">The Classroom of the Future Foundation</span>, partnering with the San Diego County Office of Education, is working hard to catch up.</div><div><br></div><div><aside></aside></div><div>For the first time in the eight-year history of the foundation’s future career summit, held in February, AI commanded the center of attention. One of the speakers taking center stage in that discussion was Dr. Patrick Gittisriboongul, assistant superintendent of technology and innovation for the Lynwood Unified School District in Los Angeles.</div><div><br></div><div>“Artificial Intelligence is a game changer,” said Gittisriboongul, contending that 80% of the workforce in the near future will have between 20% and 50% of their tasks performed by AI. That might include web design, tax preparation, engineering, administrative and secretarial work, data management, mathematics and customer service, according to Gittisriboongul, former assistant superintendent of innovation for the San Diego County Office of Education.</div><div><br></div><div>At the same time, he contends, only 18% to 28% of public schools in the U.S. have any strategy to deal with an AI future. Gittisriboongul is leading a task force in Lynwood to change that.</div><div><br></div><div>“Every organization, no matter how small, needs to develop a strategy now,” he said. “For example, an art teacher might ask: ‘Is it OK to generate art?’ and ‘Would you consider that an original piece of art?’ If we want to prepare kids for a world where AI is part of our society, AI has to be part of that discussion.”</div><div><br></div><div><div>Currently, some 8.5 million high-tech and AI-related jobs in the U.S. are not being filled despite mind-boggling salaries. In San Diego, for example, QUALCOMM and AppFolio are looking for IT data scientists, offering $180,000 to $270,000 a year and $123,000 to $185,000 a year, respectively. 
Other openings include software and technical director, $155,000 to $277,530; software engineers, from $95,000 to $243,000; and analytics engineers, $104,000 to $156,000.</div><div><br></div><div><aside></aside></div><div>Elsewhere in the country, openings call for senior machine learning engineers, $160,000 to $176,000 a year; graphics software engineers, $550 a day; backend software engineers, $180,000 a year; senior data scientists, $127,300 a year; and business systems analysts, $125,000 to $135,000 a year. </div><div><br></div><div>The National Institute of Science Initiative on Cyber Education has even identified 52 high-tech and AI-related roles that are not yet listed under U.S. Department of Labor codes.</div><div><br></div><div>“Some require about eight months of education to get a job that would probably be comparable to a college graduate’s pay. The outlook for jobs is good,” said Sibley, who refers to them as neither white collar nor blue collar but “blue-white collar” jobs.</div><div><br></div><div>Never before has such a vast array of power come together at once with the ability to disrupt and yet enhance the distribution of knowledge: QUALCOMM’s Snapdragon superchip exponentially increasing the efficiency and power of mobile devices across varied platforms; data processed from everywhere in nanoseconds; increasingly sophisticated algorithms able to cloud-share vast pools of data; the proliferation of global data centers, like Amazon Web Services, serving as holding patterns in the dissemination of knowledge; and a vast infusion of capital from government and private industry.</div><div><br></div><div><span class="fs12lh1-5"><b>All this is something that can’t be ignored.</b></span></div><div><br></div><div><aside></aside></div><div>In the end, most of that burden of acceptance rests on the individual teacher. 
Yet, how each teacher perceives and receives artificial intelligence will depend on familiarity with the technology and an understanding of the impact it will have. The reluctance by some thus far may be attributed to a fear of losing control in the classroom — or, perhaps, something deeper. Sibley reached back in history with the following anecdotal analysis: </div><div><br></div><div>The invention of the printing press by Johannes Gutenberg in 1440, for the first time, made the wide dissemination of information possible. But it took 400 years for that to happen — 200 years for the first newspaper to reach a general public in Germany and another 200 years for the modern newspaper to make all the news that’s “fit to print” available to everyone.</div><div><br></div><div>That should have happened sooner, but, Sibley contends, “The powers that be were trying to control that. They did not want easy access to knowledge and did everything to control it and to keep it from democratizing different forms of education. Think of the hundreds of years it took for (the invention of the printing press) to have a global impact. I would posit that AI, by some data I have seen, will be a hundred times more pervasive.”</div><div><br></div><div>The ability to accumulate, harness and transfer such power is greater than ever. 
At the same time, while some students are being told “You can’t use it,” others are told “You must use it.”</div><div><br></div><div>Perhaps ambivalence is the biggest hurdle yet to overcome.</div><div><aside></aside></div><div><em>Leonard Novarro is vice president of the <span class="imUl cf2"><a href="http://asianheritagesociety.org/index.php" class="imCssLink" onclick="return x5engine.utils.location('http://asianheritagesociety.org/index.php', null, false)">Asian Heritage Society</a></span> and author of <a href="http://wordslingerbook.com/" target="_blank" class="imCssLink"><span class="imUl cf2">WORDSLINGER: The Life and Times of a Newspaper Junkie</span>.</a> Rosalynn Carmen is president of the society and holds the AWS Certified Machine Learning – Specialty and AWS Certified DevOps Engineer – Professional certifications.</em></div></div></div>]]></description>
			<pubDate>Sun, 09 Jun 2024 04:53:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/man-1839500_1280_thumb.webp" length="36518" type="image/webp" />
			<link>http://asianheritagesociety.org/blog/?ai-will-be-crucial-for-today-s-students,-but-our-public-schools-may-be-ambivalent</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000083</guid>
		</item>
		<item>
			<title><![CDATA[Survey Finds Small Businesses See Artificial Intelligence as Tool for Growth]]></title>
			<author><![CDATA[Times of San Diego]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000082"><div><span class="fs12lh1-5">Despite the headlines touting artificial intelligence (AI) as a human replacement, most small business owners and employees surveyed by <span class="cf1">Cox Business</span> view the technology as a tool to strengthen and grow their teams.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">Cox Business surveyed small business owners and employees to better understand their sentiment toward AI and how they use the technology in the workplace. Among those surveyed, 52% of small business owners said AI enables them to increase or retain employees, while 65% of small business employees said the same.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">According to the survey, small business owners are increasing their AI investment to grow customer service, marketing and sales in 2024.</span></div><div><span class="fs12lh1-5"><br></span></div><div><b class="fs12lh1-5">Enhancing the Customer Experience</b></div><div><span class="fs12lh1-5">Both small business owners and employees feel that they have a good grasp on what AI is and are comfortable using the tools within their organization:</span></div><div><span class="fs12lh1-5"><br></span></div><div><ul><li><span class="fs12lh1-5 cf2">85% of owners are somewhat to very comfortable using AI tools in their business</span></li><li><span class="fs12lh1-5 cf2">75% of employees are somewhat to very comfortable using AI tools in their business</span></li></ul></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">Small business owners (53%) report AI had a positive impact on customer experience in 2023 and plan to use AI to support the customer experience in several ways this year:</span></div><div><span class="fs12lh1-5"><br></span></div><div><aside></aside></div><div><ul><li><span class="fs12lh1-5 cf2">36%: Online order product/service recommendations</span></li><li><span class="fs12lh1-5 cf2">35%: Online order 
placement</span></li><li><span class="fs12lh1-5 cf2">35%: Website live chatbot</span></li><li><span class="fs12lh1-5 cf2">33%: Customer service calls</span></li></ul></div><div><span class="fs12lh1-5"><br></span></div><div><b class="fs12lh1-5">AI Investment</b></div><div><span class="fs12lh1-5">One-third of small business owners invested in AI for their company last year, and 53% plan to invest even more in AI in 2024.</span></div><div><span class="fs12lh1-5"><br></span></div><div><div><span class="fs12lh1-5">“The data clearly shows that small- and medium-sized businesses are embracing AI,” said Mark Greatrex, president of <span class="imUl cf1">Cox Communications</span>. “Leveraging AI to boost productivity and enhance the customer experience empowers entrepreneurs to take their business to the next level and prosper. With our generative AI practice at <span class="imUl cf1">RapidScale</span>, we are making it easier to realize the benefits faster.”</span></div><div><span class="fs12lh1-5"><br></span></div><div><b class="fs12lh1-5">Help Wanted</b></div><div><span class="fs12lh1-5">Currently, 75% of small business owners say they are responsible for their company’s AI implementation and operations. Even though many owners and employees say their company did not feel much impact from last year’s IT labor shortage, 42% of owners did see an impact, experiencing decreased revenue. 
Employees who saw an impact experienced:</span></div><div><span class="fs12lh1-5"><br></span></div><div><ul><li><span class="fs12lh1-5 cf2">43%: Added job responsibility</span></li><li><span class="fs12lh1-5 cf2">40%: Increased stress in the workplace</span></li><li><span class="fs12lh1-5 cf2">38%: Working longer hours</span></li></ul></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">“For SMBs with limited technology resources, building AI models specific to their business can be intimidating,” said Jeff Breaux, executive vice president of Cox Business. “But the engineers at RapidScale can make Generative AI accessible and guide businesses on the right deployments to improve a variety of use cases. From building the optimal data resources to training the machine learning models, we can make Generative AI achievable for a wider set of businesses looking for a powerful new growth engine.”</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">For more key survey findings, visit<span class="imUl cf1"> </span><span class="imUl cf1"><a href="http://www.coxblue.com/SmallBizSurvey" target="_blank" class="imCssLink">CoxBLUE.com/SmallBizSurvey</a></span>.</span></div></div></div>]]></description>
			<pubDate>Fri, 17 May 2024 10:19:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/Office-Group_thumb.webp" length="70162" type="image/webp" />
			<link>http://asianheritagesociety.org/blog/?survey-finds-small-businesses-see-artificial-intelligence-as-tool-for-growth</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000082</guid>
		</item>
		<item>
			<title><![CDATA[Artificial Intelligence Could Help Officers Screen Applicants for Asylum at Border]]></title>
			<author><![CDATA[Times of San Diego]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000081"><div>The <span class="cf1">Department of Homeland Security</span> is piloting artificial intelligence to train officers who review applicants for refugee status in the United States, Secretary Alejandro Mayorkas told reporters on Tuesday.</div><div><br></div><div>The work addresses what Mayorkas said is “labor-intensive” instruction that typically involves senior personnel. In this pilot, he said, DHS is training machines to act like refugees so officers can practice interviewing them.</div><div><br></div><div>“Refugee applicants, given the trauma that they have endured, are reticent to be forthcoming in describing that trauma,” he said. “So we’re teaching the machine to be reticent as well” and to adopt other “characteristics” of applicants.</div><div><br></div><div><aside></aside></div><div>The remarks, made on the sidelines of the security-focused <span class="cf1">RSA Conference</span> in San Francisco, elaborate on AI initiatives that DHS announced earlier this year. The department has said it planned to develop an interactive app to supplement its training of immigration officers, drawing on so-called generative AI that creates novel content based on past data.</div><div><br></div><div>Specifically, <span class="cf1">United States Citizenship and Immigration Services</span>, an agency within DHS, would build an AI program that tailored training materials to officers’ needs and prepared them to make more accurate decisions, the department said.</div><div><br></div><div><aside></aside></div><div>AI will not make immigration decisions itself, DHS told Reuters. The AI will know country-specific conditions and other information to help officers, Mayorkas said.</div><div><br></div><div>The pilot adds to the many tests in industry and government seeking to reduce costs and improve performance through AI, particularly after ChatGPT’s viral launch in 2022. 
Such experimentation has not been without problems, including issues with translation, incorrect timeframes and pronouns.</div><div><br></div><div>Among more “advanced” deployments of AI, Mayorkas said the department has worked to spot anomalies when commercial trucks and passenger vehicles make border crossings. The goal, he said, is to help the department detect attempts to smuggle fentanyl and other contraband into the United States.</div></div>]]></description>
			<pubDate>Mon, 06 May 2024 22:52:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/Migrants-from-Georgia_thumb.webp" length="83382" type="image/webp" />
			<link>http://asianheritagesociety.org/blog/?artificial-intelligence-could-help-officers-screen-applicants-for-asylum-at-border</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000081</guid>
		</item>
		<item>
			<title><![CDATA[Artificial Intelligence Has Big Potential in Education, But Teachers Must Be Ready]]></title>
			<author><![CDATA[Leonard Novarro and Rosalynn Carmen]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Ai_San_Diego"><![CDATA[AI San Diego]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000007F"><div>Norah Dann, a 15-year-old Grossmont High School student, is ambivalent about artificial intelligence. </div><div><br></div><div>After using Adobe Firefly, for example, to scroll through its 300 million images and come up with something new, she said, “It was not interesting in any way. It lacked creativity and had a formulaic quality about it.” Another attempt, to create a short story about a cat by prompting ChatGPT with instructions, was even less innovative. After all, a cat named “Whiskers”?</div><div><br></div><div><span class="fs12lh1-5">“I found, as an image generator, it didn’t add anything,” said Norah. “It came up with very generic stuff.” Still, apart from some ethical reservations about using AI tools such as ChatGPT, Copilot, Bard and Meta AI to write fiction and compose music, she remains open to the possibilities AI provides.</span></div><div><span class="fs12lh1-5"><br></span></div><div>The problem: Many teachers aren’t.</div><div><br></div><div>A recent survey of 450 schools and universities by <span class="cf1">UNESCO</span> found that fewer than 10% have developed any policy covering the use of AI in education. A consensus of several other surveys indicates that while 50% of public schools use AI in their admission process, only 10% of public school teachers have adopted any form of AI tools in the classroom. In contrast, 38% of private schools already use AI, while another 43% plan to adopt it this year.</div><div><br></div><div><aside></aside></div><div><span class="cf1">Francis Parker</span>, a private San Diego school, goes further. In November, Parker hosted a gathering of students’ parents and grandparents to acquaint them with the use of AI in the classroom. Denver Guess, the school’s director of curriculum alignment and instructional practice, emphasized how platforms like ChatGPT are not just tools to answer questions; they can boost learning. 
To illustrate his point, he asked adults in the room to hark back to their school days and share the technologies that they may have used, such as overhead projectors and pagers.</div><div>“AI can be a valuable educational tool when used responsibly and guided by teachers,” he told the audience.</div><div><br></div><div>However, artificial intelligence is more than pumping questions into a content creator like ChatGPT or META. Think of it as a field of computer science that creates systems to perform tasks, such as problem solving and decision making, that would ordinarily require human intelligence. </div><div>AI uses algorithms to pore over vast amounts — say billions — of pieces of information furnished by sources ranging from Google to your local telephone company. Then, based on the patterns it finds, the AI makes predictions. The process is called machine learning. Deep learning, a subset of machine learning, then mimics the human brain by using algorithms to create an artificial neural network that will take the data it collects and convert it to a complex act, ranging from playing games to diagnosing cancer. </div><div><br></div><div><div>In addition to understanding data and solving difficult problems, AI can be used to explore future careers. It is precisely for that reason that Peter Sibley, CEO of <span class="cf1">Journeys Map</span>, a San Diego firm that matches student aptitude with potential careers, and Drew Schlosberg, a <span class="cf1">Classroom of the Future Foundation</span> advisory board member, lobbied to include artificial intelligence as an integral topic of this year’s CFF College and Career Pathways Summit held at National University in late February. </div><div><br></div><div><aside></aside></div><div>“We can’t call ourselves the Classroom of the Future Foundation if we don’t have in our summit a major footprint on the educational system,” said Schlosberg. 
“The schools are working very hard to close the equity gap.”</div><div><br></div><div>In addition to Sibley, students and teachers heard from Sai Huda, CEO of <span class="cf1">Cyber Catch</span>, a security firm; and Dr. Patrick Gittisriboongul, assistant superintendent of technology and innovation for the Lynwood Unified School District in Los Angeles. Gittisriboongul is also former assistant superintendent of innovation for the San Diego County Office of Education. </div><div><br></div><div>“Kids cannot afford to miss the boat,” said Gittisriboongul. “I would tell them that in some shape, way or form, AI will make its way into what they are doing. Already, AI is generating music. It’s an AI world, whether you think you are using it or not.”</div><div><br></div><div>The Classroom of the Future Foundation, under the auspices of the San Diego County Office of Education, was created in 1997 to serve as a regional technology hub that would shape the future for the county’s public schools — a future replete with “better jobs.” That’s the best way to “help underserved students get out of poverty,” said Jane Schlosberg, director of development and operation for the foundation. Despite the Office of Education and the foundation’s best efforts, that may not be as easy as it sounds. For example, one might ask: What jobs?</div><div><br></div><div><span class="fs12lh1-5">The magazine US News &amp; World Report authoritatively offers these top ten: nurse practitioner, financial manager, software developer, IT manager, physician assistant, medical and health services manager, security specialist, data scientist, actuary and speech language pathologist. 
Another job expert, </span><span class="fs12lh1-5 cf1">Indeed.com</span><span class="fs12lh1-5">, has its own top ten: mental health technician, loan officer, mental health therapist, electrical engineer, construction project manager, mechanical engineer, psychiatrist, human resources manager, senior accountant and data engineer.</span></div><div><span class="fs12lh1-5"><br></span></div><div><aside></aside></div><div>According to Sibley and Gittisriboongul, the top-ten future jobs may not be on either list. They should know. Gittisriboongul went from web developer to teacher when he was laid off during the dot-com bust. Sibley took the opposite path. Before he went on to own three companies, including one of the world’s largest repositories of academic standards, he was the dean of computer science at National University. After leaving, he became an entrepreneur and was named the San Diego Business Journal’s CEO of the Year in 2023.</div><div><br></div><div>“When you ask students today what they want to be,” said Sibley, “the top professions you always hear are lawyers, engineers, health care workers.” However, according to him, these jobs are not what the future may hold. “Kids are told: ‘Follow your passion.’ The only problem is most people don’t know their passion,” he added.</div><div><br></div><div>“That may be the one thing that AI can be useful for.”</div><div><br></div><div><em>Leonard Novarro is vice president of the <a href="http://www.asianheritagesociety.org/" target="_blank" class="imCssLink"><span class="cf1">Asian Heritage Society</span> </a>and author of <a href="http://wordslingerbook.com/" target="_blank" class="imCssLink"><span class="cf1">WORDSLINGER: The Life and Times of a Newspaper Junkie</span>.</a> Rosalynn Carmen is president of the society and holds the AWS Certified Machine Learning – Specialty and AWS Certified DevOps Engineer – Professional certifications.</em></div><div><aside><div><br></div></aside></div></div></div>]]></description>
			<pubDate>Thu, 02 May 2024 16:13:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/SDSU-Students_thumb.webp" length="28380" type="image/webp" />
			<link>http://asianheritagesociety.org/blog/?artificial-intelligence-has-big-potential-in-education,-but-teachers-must-be-ready</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000007F</guid>
		</item>
		<item>
			<title><![CDATA[Qualcomm Forecast Beats Analysts’ Expectations as AI Energizes Smartphone Market]]></title>
			<author><![CDATA[Times of San Diego]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000080"><div><span class="cf1">Qualcomm</span> on Wednesday forecast fiscal third-quarter sales and adjusted profit above Wall Street expectations, driven by a faster-than-expected recovery in smartphone markets thanks to artificial-intelligence features.</div><div><br></div><div>The San Diego-based wireless pioneer forecast third-quarter sales and adjusted profit with midpoints of $9.2 billion and $2.25 per share, beating analyst estimates of $9.05 billion and $2.17 per share, according to LSEG data.</div><div><br></div><div>Shares were up 3% after the results in volatile after-hours trading.</div><div><br></div><div>Qualcomm is the world’s biggest supplier of chips for smartphones and counts both Apple and Samsung as customers. The company’s sales declined sharply last year following a boom during the pandemic. The drop was felt especially in the Android phone market, where Qualcomm draws most of its business.</div><div><br></div><div>The company faces competitive pressure from China’s Huawei Technologies, which last year introduced a domestically made smartphone chip, and Taiwanese rival MediaTek, which last week said it expects rising sales this year as it gains market share among premium-priced Android handsets.</div><div><br></div><div><div>For the fiscal second quarter ended March 24, Qualcomm’s sales and adjusted profit were $9.39 billion and $2.44 per share, respectively, above analyst expectations of $9.34 billion and $2.32, according to LSEG data.</div><div><br></div><div>Qualcomm is hoping to benefit from consumer demand to upgrade to devices that run AI chatbots directly on the device rather than relying on a data center.</div><div><br></div><div>In a challenge to Apple, Qualcomm plans to release a chip designed to power laptops starting this summer, though that small amount of early sales is unlikely to play a major role in the company’s third-quarter forecast, analysts 
said.</div><div><br></div><div>In Qualcomm’s chip segment, the company forecast fiscal third-quarter sales with a midpoint of $7.8 billion, compared with analyst estimates of $7.74 billion, according to LSEG data.</div><div><br></div><div>Qualcomm predicted third-quarter patent-licensing sales with a midpoint of $1.3 billion, compared with estimates of $1.29 billion.</div><div><br></div><div><aside></aside></div><div>For the just-ended fiscal second quarter, Qualcomm said chip and licensing revenues were $8.03 billion and $1.32 billion, respectively, compared with analyst estimates of $7.95 billion and $1.32 billion, according to LSEG.</div><div><br></div><div>Within Qualcomm’s chip business, the company said that mobile handsets generated $6.18 billion in sales in the second quarter, compared with estimates of $6.23 billion, according to data from Visible Alpha. Automotive and Internet-of-Things chip revenues in the second quarter were $603 million and $1.24 billion, respectively, compared with analyst estimates of $578.9 million and $1.22 billion.</div></div></div>]]></description>
			<pubDate>Wed, 01 May 2024 22:47:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/Qualcomm-Trade-Show_thumb.webp" length="75456" type="image/webp" />
			<link>http://asianheritagesociety.org/blog/?qualcomm-forecast-beats-analysts--expectations-as-ai-energizes-smartphone-market</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000080</guid>
		</item>
		<item>
			<title><![CDATA[Tom York on Business: San Diego ‘Zoomers’ Face High Rents in Years Ahead]]></title>
			<author><![CDATA[Times of San Diego]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000007E"><div><span class="fs12lh1-5">The youthful members of Generation Z — those born between the mid-1990s and 2012 — will spend $145,000 on rent before turning 30 — 14% more than what the Millennial generation paid, according to a new <span class="imUl cf1">study</span> by apartment rental website <strong>RentCafe</strong>.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">In San Diego, the so-called “Zoomers” will pay even more — a whopping $220,770.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">According to the study, despite rent inflation, renting costs the generation much less than home ownership, with ownership costs climbing to $315,000 by the age of 30 — a figure that includes the mortgage, taxes, and fees, but not the down payment. </span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">“Renting is a no-brainer here in San Diego because the owning costs are much higher,” said a spokeswoman for RentCafe. “San Diego has the nation’s fifth highest cost difference between owning and renting for Gen Z’ers younger than 30. Here, the cost difference reaches $94,093.”</span></div><div><span class="fs12lh1-5">The cities in the U.S. where Generation Z faces the highest costs for renting and owning are all in California, including San Jose, San Francisco and San Diego, she added. </span></div><div><span class="fs12lh1-5"><br></span></div><div><aside></aside></div><div><span class="fs12lh1-5">The most expensive place to rent is San Jose, where the cost for Gen Z’ers reaches $300,000 before they turn 30. 
</span></div><div><span class="fs12lh1-5"><br></span></div><div class="imTACenter"><strong class="fs12lh1-5">* * *</strong></div><div class="imTACenter"><strong class="fs12lh1-5"><br></strong></div><div><span class="fs12lh1-5">San Diego County’s investment fund, which takes in retirement monies from more than 200 public agencies, has reached $18 billion — a new record, County Treasurer-Tax Collector <strong>Dan McAllister</strong> said in a news release recently.   </span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5"> This record follows another notable milestone — the office collected more than $1 billion in property taxes in a single day on April 8. </span></div><div><span class="fs12lh1-5"> “The record $18 billion investment pool, along with our ‘AAA’ rating, underscores the county’s commitment and financial acumen in managing the public’s money,” said McAllister. </span></div><div><span class="fs12lh1-5"><br></span></div><div><aside></aside></div><div><span class="fs12lh1-5">Mandatory participants in the pool include the county government, plus 42 local school districts, five community college districts and all of the county’s water and fire districts. </span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">Voluntary participants include the <strong>San Diego Regional Airport Authority</strong>, <strong>SANDAG,</strong> <strong>MTS </strong>and several other special government districts.  </span></div><div><span class="fs12lh1-5"><br></span></div><div class="imTACenter"><strong class="fs12lh1-5">* * *</strong></div><div class="imTACenter"><strong class="fs12lh1-5"><br></strong></div><div><span class="fs12lh1-5"><strong>Forbes</strong> magazine has named San Diego’s<strong> </strong><strong><span class="cf1">Axos Bank</span></strong>, a subsidiary of <strong>Axos Financial</strong>, to its 2024 list<strong> of America’s Best Banks</strong>. 
</span></div><div><span class="fs12lh1-5">In compiling the list, Forbes considered the 200 largest publicly traded banks and thrifts by assets and ranked the top 100.</span></div><div><span class="fs12lh1-5"><br></span></div><div><aside></aside></div><div class="imTACenter"><strong class="fs12lh1-5">* * *</strong></div><div class="imTACenter"><strong class="fs12lh1-5"><br></strong></div><div><span class="fs12lh1-5">Three-decade-old San Diego-based planning and engineering firm <strong>Latitude 33</strong> recently opened an office in downtown Los Angeles.</span></div><div><span class="fs12lh1-5">According to a news release, the firm’s high-profile work includes San Diego International Airport’s $3.4 billion Terminal 1 expansion and UC San Diego’s $2.5-billion-plus Hillcrest Medical Campus Redevelopment.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">For more information, click <span class="imUl cf1"><a href="https://latitude33.com/" target="_blank" class="imCssLink">here</a></span>. 
</span></div><div class="imTACenter"><strong class="fs12lh1-5">* * *</strong></div><div class="imTACenter"><strong class="fs12lh1-5"><br></strong></div><div><aside></aside></div><div><span class="fs12lh1-5">San Diego home prices were up 9.5% year over year at the end of March, according to a report released by Orange County-based housing data provider <strong><span class="cf1">First American Data &amp; Analytics</span></strong>.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">The company’s March 2024 <strong><span class="cf1">Home Price Index</span></strong> tracks home price changes less than four weeks behind real time at the national, state and metropolitan levels.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">San Diego’s increase trailed only Anaheim and Miami, both of which rose 10.5%.</span></div><div><span class="fs12lh1-5"><br></span></div><div class="imTACenter"><strong class="fs12lh1-5">* * *</strong></div><div class="imTACenter"><strong class="fs12lh1-5"><br></strong></div><div><span class="fs12lh1-5">San Diego’s <strong><span class="cf1">Chosen Foods</span></strong> has launched a line of three different sauces made with avocado oil. </span></div><div><span class="fs12lh1-5"><br></span></div><div><aside></aside></div><div><span class="fs12lh1-5">According to a news release, the sauces are made with natural flavors and have no seed oils or artificial ingredients.</span></div><div><span class="fs12lh1-5"><br></span></div><div class="imTACenter"><strong class="fs12lh1-5">* * *</strong></div><div class="imTACenter"><strong class="fs12lh1-5"><br></strong></div><div><span class="fs12lh1-5">San Diego’s <strong>California American Water</strong> said it is making available $8.3 million in bill relief for customers who faced financial hardship during the COVID-19 pandemic. </span></div></div>]]></description>
			<pubDate>Thu, 18 Apr 2024 07:53:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/apartment-406901_1280_thumb.webp" length="64144" type="image/webp" />
			<link>http://asianheritagesociety.org/blog/?tom-york-on-business--san-diego--zoomers--face-high-rents-in-years-ahead</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000007E</guid>
		</item>
		<item>
			<title><![CDATA[Qualcomm Has Created a ‘Digital Maestro’ with its Snapdragon Chips]]></title>
			<author><![CDATA[Leonard Novarro and Rosalynn Carmen]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Ai_San_Diego"><![CDATA[Ai San Diego]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000076"><div><em>Editor’s Note: This is the third in a series of articles about artificial intelligence in San Diego.</em></div><div>Several years ago, while attending a meeting of nonprofit societies, Irwin Jacobs, founder of <span class="cf1">Qualcomm</span>, arriving a little late, took an empty seat in the back of the room next to my wife, Rosalynn.</div><div><br></div><div>At some point, Rosalynn’s cell phone rang. As she promptly shut it off, she turned to Jacobs and apologized. Paraphrasing the last line of the Christmas movie “It’s a Wonderful Life,” Jacobs replied: “No need to apologize. Every time it rings, I get my wings.”</div><div><br></div><div>One of Qualcomm’s creations spawned by cell phones is increasingly spreading its own wings. It’s aptly named Snapdragon, after the bright and multicolored flower famous for its versatility. More importantly, this super chip is changing the face of technology and the future of artificial intelligence.</div><div>Qualcomm’s leadership is so enamored of this creation that it named San Diego State University’s new stadium Snapdragon — after shelling out $45 million for naming rights.</div><div><br></div><div><aside></aside></div><div>But Snapdragon is much more than a chip and stadium. Some call it a super brain. Others liken it to a symphony conductor who, with a baton, conjures a litany of different sounds from a variety of sources. And that’s basically what Snapdragon does in maestro-like fashion by orchestrating a complex variety of functions in smartphones, tablets and laptops — simultaneously.</div><div><br></div><div><div>Since its inception in 2007, Snapdragon has been a game-changer, merging multiple crucial functions into a single chip. The company likens it to a digital brainiac fueling advanced camera capabilities, seamless gaming experiences, and lightning-fast Internet connections. 
It’s like having a personal assistant, always one step ahead, anticipating your needs and delivering with precision.</div><div><br></div><div>Now, with the rapid development of artificial intelligence, Snapdragon chips are elevating devices to a whole new level by interpreting voice commands, processing queries, and delivering tailored responses with lightning speed. The chip is held in such esteem that conferences focusing on its functions are held all over the world. At a recent one in Hawaii, Cristiano Amon, president and CEO of Qualcomm, wowed his audience as he outlined the chip’s expanding capabilities.</div><div><br></div><div>“We are entering the era of AI, and on-device generative AI will play a critical role in delivering powerful, fast, personal, efficient, secure and highly optimized experiences,” said Amon. “Snapdragon is uniquely positioned to help shape and capitalize on the on-device AI opportunity and you will see generative AI going virtually everywhere that Snapdragon goes.” </div><div><br></div><div>Taking the metaphor a step further, it’s like a traveling workshop in which artisans craft with unparalleled precision. The chip enhances performance, reduces power consumption and unlocks a wide range of AI-related tasks, from voice recognition to image processing, transforming devices into intuitive companions that let users navigate the digital landscape with confidence and ease.</div><div><br></div><div><aside></aside></div><div>With such a wide spectrum of possibility, you can’t get more robust or colorful than that.</div><div><br></div><div><em>Leonard Novarro is vice president of the <a href="http://asianheritagesociety.org/index.php" class="imCssLink" onclick="return x5engine.utils.location('http://asianheritagesociety.org/index.php', null, false)"><span class="imUl cf1">Asian Heritage Society</span> </a>and author of <a href="http://wordslingerbook.com/" target="_blank" class="imCssLink"><span class="imUl cf1">WORDSLINGER: The 
Life and Times of a Newspaper Junkie</span>.</a></em></div></div></div>]]></description>
			<pubDate>Tue, 16 Apr 2024 14:25:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/Snapdragon_thumb.jpg" length="160981" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?qualcomm-has-created-a--digital-maestro--with-its-snapdragon-chips</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000076</guid>
		</item>
		<item>
			<title><![CDATA[AI Projects Adopted In Business Mobilize Up To $20 Million Per Year]]></title>
			<author><![CDATA[Times of San Diego]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Trends"><![CDATA[Trends]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000007D"><div>According to a Deloitte survey, 53% of companies adopting artificial intelligence spent more than $20 million in the last year on technology and talent.</div><div><br></div><div>The State of AI in Business Survey, based on 2,737 IT and industry executives, highlights how quickly AI applications are entering production. Of those surveyed by Deloitte, 26% are “seasoned adopters”, 47% “skilled adopters” and 27% “newbies”. Respondents were ranked based on AI adoption and systems put into production.</div><div><br></div><div>According to the research firm, 68% of seasoned adopters spent more than $20 million in the past year on AI. In addition, 81% of them confirmed a return on investment in less than two years.</div><div><br></div><div></div><div><span class="fs12lh1-5"><b>Improved decision-making</b></span></div><div>Regarding the technological range of AI, 67% of respondents today use machine learning, 97% plan to do so, 54% use deep learning and 58% natural language processing, says Deloitte.</div><div><br></div><div>These AI enthusiasts see more efficient processes as the primary rationale for deployments, with improved decision-making also a key goal. 
AI adopters also typically buy more technology than they build, but only 47% of those surveyed said they use suppliers, Deloitte suggests.</div><div><br></div><div><strong><b>Other key results include:</b></strong></div><div><ul><li><span class="fs12lh1-5 ff1">45% said they had a high level of proficiency in integrating AI technology into their existing IT environment.</span></li><li><span class="fs12lh1-5 ff1">93% use AI in the cloud, 78% use open-source AI.</span></li><li><span class="fs12lh1-5 ff1">61% said they believe AI will dramatically transform their industry.</span></li><li><span class="fs12lh1-5 ff1">62% said they were very concerned about AI-related cybersecurity vulnerabilities, followed by failures impacting business operations and the use of personal data without consent. Responsibility and regulatory developments are also major concerns.</span></li><li><span class="fs12lh1-5 ff1">95% of those surveyed are concerned about the ethical risks associated with deploying AI.</span></li><li><span class="fs12lh1-5 ff1">62% of respondents believe that AI technologies should be heavily regulated.</span></li><li><span class="fs12lh1-5 ff1">The main beneficiaries of the success of artificial intelligence are the IT departments themselves.</span></li></ul></div></div>]]></description>
			<pubDate>Mon, 15 Apr 2024 22:27:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/technology-3389904_1920_thumb.jpg" length="323021" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?ai-projects-adopted-in-business-mobilize-up-to---20-million-per-year-1</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000007D</guid>
		</item>
		<item>
			<title><![CDATA[Google and Qualcomm Announce New Version of Chrome for Snapdragon Chips]]></title>
			<author><![CDATA[Times of San Diego]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000077"><div>Google and <span class="cf1">Qualcomm</span> announced Tuesday a new version of the Chrome web browser that is optimized for computers using the San Diego company’s Snapdragon processor and running Windows.</div><div><br></div><div>The development means that Qualcomm chips can be used in place of legacy Intel or AMD processors on the most powerful Windows laptops.</div><div>“The new version of Google Chrome will help cement <span class="cf1">Snapdragon X Elite</span>‘s role as the premier platform for Windows PCs starting in mid-2024,” said Qualcomm President Cristiano Amon.</div><div><br></div><div><aside></aside></div><div>“The PC industry is on the cusp of an inflection point, and as we enter the era of the AI PC, we can’t wait to see Chrome shine by taking advantage of the powerful Snapdragon X Elite system,” Amon said.</div><div><br></div><div>Chrome for Windows on Snapdragon is downloadable now for anyone with an existing Windows on Snapdragon device.</div><div><aside></aside></div><div>“Our close collaboration with Qualcomm Technologies will help ensure that Chrome users get the best possible experience while browsing the Web,” said Hiroshi Lockheimer, a Google senior vice president.</div><div><br></div><div>The two companies have been partners on Android phones since the very first device in 2008. Snapdragon chipsets also power many wearables with Google software, and the two companies reaffirmed their collaboration for upcoming XR devices in January.</div></div>]]></description>
			<pubDate>Mon, 15 Apr 2024 14:15:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/QUALCOMM-X-GOOGLE_thumb.jpg" length="32782" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?google-and-qualcomm-announce-new-version-of-chrome-for-snapdragon-chips-1</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000077</guid>
		</item>
		<item>
			<title><![CDATA[Gloria E. Ciriza Named First Female County Superintendent of Schools]]></title>
			<author><![CDATA[Times of San Diego]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Education"><![CDATA[Education]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000007C"><div>Gloria E. Ciriza will succeed Paul Gothold as the county’s superintendent of schools, becoming the first female superintendent in the <span class="cf1">San Diego County Office of Education</span>‘s 76-year history, the SDCOE announced Monday.</div><div><br></div><div>The county <span class="cf1">Board of Education</span>, which made the decision during a special meeting Saturday, is expected to consider Ciriza’s contract at its next regular meeting, on May 8. It’s anticipated she will assume the post on July 1.</div><div><br></div><div>Gothold is retiring after seven years in the role.</div><div><br></div><div class="imTACenter"><img class="image-2" src="http://asianheritagesociety.org/images/sos2.jpg"  title="" alt="" width="739" height="809" /><br></div><div class="imTACenter"><br></div><div class="imTACenter"><div style="text-align: start;">Ciriza is currently the SDCOE’s assistant superintendent of student services and programs, which includes the Juvenile Court and Community Schools, Special Education, Student Support, Student Wellness and School Culture, Whole Child and Community Design, and Outdoor Education departments, a statement from the office said.</div><div style="text-align: start;"><br></div><div style="text-align: start;">She joined SDCOE in March 2021 and has advanced SDCOE’s North Star goal to reduce poverty through public education, the department said.</div><div style="text-align: start;"><aside></aside></div><div style="text-align: start;">“I am thrilled to be selected and to continue advancing progress towards SDCOE’s North Star so that every child thrives in school, career and life,” Ciriza said in a statement. 
“It’s an honor to be the first woman in this role, to provide representation for young women and people of color, and to advocate for all students.”</div><div style="text-align: start;"><br></div><div style="text-align: start;">“I look forward to connecting with educational partners throughout our region and leading the organization with integrity, compassion and grace,” the statement continued.</div><div style="text-align: start;"><br></div><div style="text-align: start;"><div>According to the county office, Ciriza began her career as a substitute teacher in the National School District, then taught third, fourth, fifth and seventh grades in the San Diego and Poway districts.</div><div><br></div><div>She later served as an associate principal in the Poway and Chula Vista elementary school districts. During her tenure as principal, Heritage Elementary School became the highest-performing of 47 schools in CVESD.</div><div><br></div><div>“Dr. Ciriza is a human-centered leader who has demonstrated an ability to raise educational outcomes and success for all students, especially our most historically underserved students,” said Board of Education President Alicia Muñoz. “Kids need to see role models who look like them, and to be able to picture themselves in the workforce and in leadership positions.</div><div><br></div><div><aside></aside></div><div>“We have strong schools and tough challenges in our region,” she added. “With Dr. 
Ciriza at the helm, we look forward to deeper collaboration with our partner districts and increased academic opportunities for the 500,000 students we serve.”</div><div><br></div><div>Ciriza continued her career at CVESD as director of human resources; executive director of curriculum, instruction, and assessment; and assistant superintendent of instruction.</div><div><br></div><div>The Association of California School Administrators named her its 2018 Administrator of the Year for Curriculum and Instruction, and the California Association of Bilingual Educators named her Administrator of the Year in 2010.</div><div><br></div><div>She holds a bachelor’s degree in elementary education from Slippery Rock University, a master’s in education administration from National University, and a doctorate in educational leadership from San Diego State.</div><div><br></div><div>Besides being in charge of the SDCOE’s programs and services, the superintendent has the responsibility of approving district budgets, calling district elections and assisting with district emergencies.</div><div><br></div><div><aside></aside></div><div><em>City News Service contributed to this article.</em></div></div></div><div class="imTACenter"><br></div><div class="imTACenter"><br></div><div><figure><div bis_skin_checked="1"><br></div></figure></div></div>]]></description>
			<pubDate>Mon, 15 Apr 2024 00:51:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/SOS_thumb_9l5wrzir.webp" length="19404" type="image/webp" />
			<link>http://asianheritagesociety.org/blog/?gloria-e--ciriza-named-first-female-county-superintendent-of-schools</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000007C</guid>
		</item>
		<item>
			<title><![CDATA[US and Japan announce sweeping AI and tech collaboration]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000007B"><div class="imTAJustify">The US and Japan have unveiled a raft of new AI, quantum computing, semiconductors, and other critical technology initiatives.</div><div class="imTAJustify"><br></div><div class="imTAJustify">The ambitious plans were announced this week by President Biden and Japanese Prime Minister Kishida Fumio following Kishida’s Official Visit to the White House.</div><div class="imTAJustify"><br></div><div class="imTAJustify">While the leaders affirmed their commitment across a broad range of areas including defence, climate, development, and humanitarian efforts, the new technology collaborations took centre stage and underscore how the US-Japan alliance is evolving into a comprehensive global partnership underpinned by innovation.</div><div class="imTAJustify"><br></div><div><span class="fs12lh1-5"><b>AI takes centre stage</b></span></div><div class="imTAJustify">One of the headline initiatives is a $110 million partnership between the University of Washington, University of Tsukuba, Carnegie Mellon University, and Keio University. Backed by tech giants like NVIDIA, Arm, Amazon, and Microsoft—as well as Japanese companies—the program aims to solidify US-Japan leadership in cutting-edge AI research and development.</div><div class="imTAJustify"><br></div><div class="imTAJustify">The US and Japan also committed to supporting each other in establishing national AI Safety Institutes and pledged future collaboration on interoperable AI safety standards, evaluations, and risk management frameworks.</div><div class="imTAJustify"><br></div><div class="imTAJustify">In a bid to mitigate AI risks, the countries vowed to provide transparency around AI-generated and manipulated content from official government channels. 
Technical research and standards efforts were promised to identify and authenticate synthetic media.</div><div class="imTAJustify"><br></div><div><span class="fs12lh1-5"><b>Quantum leaps</b></span></div><div class="imTAJustify">Quantum technology featured prominently, with the US National Institute of Standards and Technology (NIST) partnering with Japan’s National Institute of Advanced Industrial Science and Technology (AIST) to build robust quantum supply chains.</div><div class="imTAJustify"><br></div><div class="imTAJustify">Trilateral cooperation between the University of Chicago, University of Tokyo, and Seoul National University was also announced to train a quantum workforce and bolster competitiveness. &nbsp;</div><div class="imTAJustify"><br></div><div class="imTAJustify">The US and Japan additionally welcomed new commercial deals including Quantinuum providing Japan’s RIKEN institute with $50 million in quantum computing services over five years.</div><div class="imTAJustify"><br></div><div class="imTAJustify">Several semiconductor initiatives were unveiled such as potential cooperation between Japan’s Leading-edge Semiconductor Technology Center (LSTC) with the US National Semiconductor Technology Center and National Advanced Packaging Manufacturing Program. 
The countries pledged to explore joint semiconductor workforce development initiatives through technical workshops.</div><div class="imTAJustify"><br></div><div class="imTAJustify">Other announced commercial deals spanned cloud computing, telecommunications, batteries, robotics, biotechnology, finance, transportation and beyond—highlighting how the alliance is fusing public and private efforts.</div><div class="imTAJustify"><br></div><div><span class="fs12lh1-5"><b>Developing humans</b></span></div><div class="imTAJustify">Initiatives around STEM education exchanges, technology curriculums, entrepreneur programs, and talent circulation efforts emphasised the focus on developing human capital to power the coming wave of digital innovation.</div><div class="imTAJustify"><br></div><div class="imTAJustify">While the technological breakthroughs grab attention, the proliferation of initiatives aimed at training, exchanging, and nurturing the innovators, researchers, and professionals across these domains could prove just as vital. The US and Japan appear determined to strategically develop and leverage human resources in lockstep with their efforts to establish cutting-edge AI, quantum, chip, and other advanced tech capabilities.</div><div class="imTAJustify"><br></div><div class="imTAJustify">Both nations clearly recognise that building complementary ecosystems across vital technologies is essential to bolstering competitiveness, economic prosperity, and national security in an era of intensifying strategic competition.</div></div>]]></description>
			<pubDate>Mon, 15 Apr 2024 00:50:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/usa-japan-ai-collaboration-artificial-intelligence-politics-government-quantum-computing-research-2048x1465_thumb.jpg" length="298170" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?us-and-japan-announce-sweeping-ai-and-tech-collaboration</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000007B</guid>
		</item>
		<item>
			<title><![CDATA[Regulators Not ‘Dazzled’ by AI Companies’ Attempts to Avoid Scrutiny]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000079"><div>Antitrust regulators are reportedly not convinced by artificial intelligence (AI) firms’ suggested hands-off policy.</div><div><br></div><div>Speaking at an antitrust conference in Washington last week, Federal Trade Commission (FTC) Chair Lina Khan said technology companies have attempted to “dazzle” policymakers with the promise of AI, but to no avail, Bloomberg News reported Friday (April 12).</div><div><br></div><div>“There’s no exemption from the laws prohibiting collusion, laws prohibiting price fixing, laws prohibiting monopolization, the laws prohibiting fraud,” she said. “The FTC is going to take action.”</div><div><br></div><div>As Bloomberg noted, the world’s antitrust agencies have grown worried that several of the most promising startups in the AI sector rely heavily on tech giants like Microsoft and Google for funding and infrastructure.</div><div><br></div><div>The concern, the report said, is that these companies are tying themselves to smaller ones to make sure they remain on top in the AI field.</div><div>According to the report, representatives from Big Tech companies spent the conference arguing that AI could transform the economy, with Google attorney Kent Walker comparing the technology to the mRNA vaccine tech used to combat COVID-19.</div><div><br></div><div>The AI industry has different dynamics, said Haidee Schwartz, OpenAI’s associate general counsel for antitrust, referring to artificial intelligence as a “positive disruptor” that can foster greater competition and bring growth to new industries.</div><div><br></div><div>Meanwhile, PYMNTS wrote last week about efforts by Big Tech companies to develop custom chips that bolster the efficiency and lower the costs of AI.</div><div><br></div><div>For example, Meta has introduced its latest generation of custom computer chips to strengthen its AI capabilities and reduce dependency on external suppliers like Nvidia. 
This news comes on the heels of Intel’s launch of an improved AI “accelerator” and comes as competitors like Google embrace in-house AI chip development. Experts said AI chips could boost commercial applications. </div><div><br></div><div>“From the business point of view, it lowers the bar for training per-customer, per-task models and moves away from just consuming APIs from providers of large language models for specialized and high-security use cases,” Amrit Jassal, co-founder and chief technology officer of Egnyte, which makes AI-powered software for businesses, said in an interview with PYMNTS. </div><div><br></div><div>As that report noted, custom chips could reduce AI costs for businesses. For now, the price of integrating generative AI into a business can vary substantially, from a few hundred dollars a month to several hundred thousand dollars for a custom solution based on a fine-tuned open-source model, according to software development firm Itrex. </div></div>]]></description>
			<pubDate>Sun, 14 Apr 2024 23:47:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/FTC-AI-1_thumb.jpg" length="71818" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?regulators-not--dazzled--by-ai-companies--attempts-to-avoid-scrutiny</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000079</guid>
		</item>
		<item>
			<title><![CDATA[Google Halts Links to California News Sites]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000007A"><div>Google is pulling links to California news sites ahead of pending news-focused legislation.</div><div><br></div><div>That bill, dubbed the California Journalism Preservation Act (CJPA), would require Google to pay for providing news content.</div><div><br></div><div>Jaffer Zaidi, Google’s vice president of global news partnerships, said in a Friday (April 12) blog entry that the tech giant would conduct “a short-term test for a small percentage of California users” to study how the proposed legislation would impact the company’s products. </div><div><br></div><div>“Until there’s clarity on California’s regulatory environment, we’re also pausing further investments in the California news ecosystem, including new partnerships through Google News Showcase, our product and licensing program for news organizations, and planned expansions of the Google News Initiative,” Zaidi wrote.</div><div><br></div><div>“To be clear, we believe CJPA undermines news in California,” he added. “We don’t take these decisions lightly,” and wanted to avoid “an outcome where all parties lose and the California news industry is left worse off.”</div><div><br></div><div>Buffy Wicks, the California assembly member behind the CJPA, told Bloomberg News last week that she would stay in dialogue with Google.</div><div>“This is a bill about basic fairness — it’s about ensuring platforms pay for the content they repurpose,” she said. 
“We are committed to continuing negotiations with Google and all other stakeholders to secure a brighter future for California journalists and ensure that the lights of democracy stay on.”</div><div><br></div><div>As PYMNTS wrote last year, the legislation would require tech giants such as Google and Meta to pay publishers a “journalism usage fee” when they use local news content and sell advertising along with it, and would require publishers to invest 70% of the profits from the fee in journalism jobs.</div><div><br></div><div>The bill has drawn the support of the 800-member California News Publishers Association (CNPA), which advocates for quality journalism, free press and fair compensation for locally produced news.</div><div><br></div><div>Meta, meanwhile, said it would remove news from Facebook and Instagram if the bill became law and it was forced to pay.</div><div>Were the CJPA to pass, it would create a “slush fund” that would benefit big media companies and that Meta would refuse to pay into, according to a company statement shared on social media last year.</div><div><br></div><div>“The bill fails to recognize that publishers and broadcasters put their content on our platform themselves and that substantial consolidation in California’s local news industry came over 15 years ago, well before Facebook was widely used,” the statement said.</div></div>]]></description>
			<pubDate>Sat, 13 Apr 2024 23:53:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/Google-News_thumb.jpg" length="35576" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?google-halts-links-to-california-news-sites</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000007A</guid>
		</item>
		<item>
			<title><![CDATA[A Court’s Rewriting of the Drug Development Process Endangers Patients in California]]></title>
			<author><![CDATA[Times of San Diego]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Ai_San_Diego"><![CDATA[Ai San Diego]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000068"><div>A recent appeals court ruling in California has posed a burning question for regulators, the life sciences industry, and patients. Who should be in charge of developing safe, life-saving medications: scientific experts and the FDA, or judges?</div><div><br></div><div>There’s a troubling answer — and disastrous implications — in <span class="imUl cf1"><a href="https://law.justia.com/cases/california/court-of-appeal/2024/a165558.html" target="_blank" class="imCssLink">the ruling</a></span>, which, if it stands, would endanger patients, undermine health equity, and slow progress towards life-saving medications for the U.S. and the world. In fact, the ruling threatens <em>every </em>field that depends on innovation, from software to the auto industry and aviation.</div><div bis_skin_checked="1"><figure><span class="fs15lh1-5 cf2 ff1"></span></figure></div><div>Under a bizarre legal theory, the court found that businesses, including pharmaceutical companies, can face legal action for failing to develop a product, even if it is not proven to be safe or effective. It’s a strange finding that attacks the foundation of a well-established system that has driven incredible medical advances and ensures safe, effective drugs for every American.</div><div><br></div><div>The ruling is wrong-headed, even dangerous, for a host of reasons.</div><div><br></div><div>First, we already have a government body to regulate drug development. It’s the FDA. The court’s approach effectively challenges the FDA itself, as it preempts and overrides FDA decisions on drug development that already take account of these matters. This is profoundly unsettling in its effect on settled national policy on drug development. 
If every jurisdiction begins to rule on how fast companies should bring drugs to market, the result will be chaotic and unsafe.</div><div><br></div><div><aside></aside></div><div>The ruling also ignores the importance of patient safety across the full drug development process. This is a complex, multifaceted, and highly technical effort that takes years to advance a treatment from an idea in a laboratory to a proven, safe product on pharmacy shelves. Every step of that process is designed to safeguard the patient. If companies are forced to rush through it — based on arbitrary timelines set by judges and lawsuits — it will imperil people’s health, especially for those living with complex conditions like cancer and heart disease.</div><div><aside><br></aside><aside><div>With this approach, companies may be forced to skip necessary steps in research and development, including thorough clinical trials, which are critical for ensuring a medication is suitable for all patients. Shortened clinical trial timelines can result in an inadequate representation of diverse populations, including Black and brown people who are often <span class="imUl cf1"><a href="https://www.fda.gov/consumers/minority-health-and-health-equity/clinical-trial-diversity" target="_blank" class="imCssLink">underrepresented</a></span> in clinical trial research.</div><div><br></div><div>And as troubling as it is for California patients, the ruling’s effects will not stay contained to the state, or even the country. The U.S. is the world’s innovation engine — responsible for many of the medical and scientific breakthroughs that have enabled the miracle of modern longevity. Just consider the disease in this case, HIV/AIDS, which was once a death sentence but has now become a treatable chronic condition.</div><div><br></div><div>If the ruling stands, companies will be actively <em>disincentivized </em>from pursuing similar success stories. 
After all, what if a judge rules they should have pulled it off faster?</div><div><br></div><div>This chilling effect is especially dire in our world of more old than young. We need more health innovation, not less. With an unprecedented <span class="imUl cf1"><a href="https://www.census.gov/library/stories/2023/05/2020-census-united-states-older-population-grew.html" target="_blank" class="imCssLink">one-in-six Americans over 65</a></span> — and <span class="imUl cf1"><a href="https://www.who.int/news-room/fact-sheets/detail/ageing-and-health#:~:text=At%20this%20time%20the%20share,2050%20to%20reach%20426%20million." target="_blank" class="imCssLink">well over 1 billion people over 60 globally</a></span> — we need new treatments, vaccines, and strategies to enable healthy aging and mitigate the human, economic, fiscal, and societal impacts of complex, costly age-related health challenges. Penalizing the life sciences sector and upending the R&amp;D process is hardly the way forward.</div><div><br></div><div><aside></aside></div><div>And finally, why stop with pharma? The court’s logic could plausibly extend to software, cars, AI — any industry where advances save lives. These areas also happen to be precisely those that California prides itself on; the ruling would clearly deter investment in the state.</div><div><br></div><div>Whether to treat the most daunting and complex health conditions such as cancer, HIV, and cardiovascular disease, or to advance technology to improve our everyday lives, companies should not be forced to rush the process of creating new and improved products. And ensuring safety and efficacy is especially critical when products directly influence people’s health.</div><div><br></div><div>Health equity and healthy aging will not be achieved through rushed R&amp;D or judicial mandate. Science is delicate, and the process to develop safe and effective medicines to treat patients with complex conditions should be managed with care. 
California’s decision prioritizes speed over safety — a dangerous precedent for our health system, our aging communities, and the life-saving innovations that we all depend on.</div><div><br></div><div><em>Michael W. Hodin, Ph.D., is chief executive officer of the Global Coalition on Aging.</em></div></aside></div></div>]]></description>
			<pubDate>Fri, 29 Mar 2024 04:26:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/times-of-san-diego_thumb.jpg" length="99513" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?a-court-s-rewriting-of-the-drug-development-process-endangers-patients-in-california</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000068</guid>
		</item>
		<item>
			<title><![CDATA[Beware the Robot Invasion — Even if They Don’t Look Anything Like Arnold Schwarzenegger]]></title>
			<author><![CDATA[Times of San Diego]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Ai_San_Diego"><![CDATA[Ai San Diego]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000073"><div>At the risk of sounding alarmist, or even paranoid, I feel the need to let everyone know of a growing threat to our way of life. This threat has the potential to dominate every facet of our currently humanistic lives. I am calling everyone’s attention to the looming danger of robots! </div><div><br></div><div>Based on my recent observations, we seem to be just a few years away from surrendering many of our duties and responsibilities to robots. I’m no expert when it comes to robots, but I did see <em>The Terminator, </em>as well as<em> <span class="cf1">2001: A Space Odyssey</span></em>, and so I feel more than qualified to sound a warning about the ominous development of such a robot invasion. These invaders are already capable of making quite an impact on our work, shopping, sports and political worlds. </div><div bis_skin_checked="1"><figure><span class="fs15lh1-5 cf2 ff1"></span></figure></div><div>I suggest we not be seduced by the efficiency that robots promise. Losing the humanistic, soulful part of our daily lives — even with its many foibles — along with the increasing loss of human autonomy is sure to leave us eventually living in a dystopia. I fear having to one day utter the words “Open the door, Hal,” and not getting a favorable response.</div><div><br></div><div>We are already witnessing the first step in the “robotization” of our society. I see this trend in the new kiosks that have been <span class="cf1">installed at local McDonald’s</span>. After all, a kiosk is merely a robot that can’t move. Yes, I realize there are benefits. Who wants to deal with a flawed human worker when you can place your order with a super-efficient kiosk? When you go to the local public library, don’t worry about a distracted library clerk checking out your book. No, you can have the mistake-free kiosk do it instead. 
When shopping for goods at Ralphs or Costco, you now have the self-checkout option in the form of an automated cash register/kiosk. </div><div><br></div><div>Of course, one day the choice will become mandatory. In fact, it’s only a matter of time before the majority of workers at restaurants and grocery stores will be replaced by upgraded mobile robots dashing around the store/restaurant to meet our every need. They will surely be programmed via artificial intelligence to have pleasant personalities capable of charming and encouraging us to tip, though we won’t have to tip robots because they have no need for money, at least not yet. </div><div><br></div><div><div>Let me tell you how strong this trend toward robotization has become. If you go today to eat a meal at <span class="cf1">Pho Ca Dao restaurant</span> in Mission Valley, there is a possibility you will be served by an actual robot! I encourage you to pay a visit to Pho Ca Dao … once there you will not only get a great meal, but you will also be afforded the opportunity to take a glimpse into our future brave new world of robot workers. When I first ate at this restaurant, I naively thought the idea of robot waiters was a charming novelty. I didn’t fully grasp the disturbing, de-humanizing trend toward robot dominance of our society, probably because these robots look goofy and not anything like Arnold Schwarzenegger. </div><div><br></div><div>Some say this ever-increasing robotization process is inevitable. If so, I guess we could meekly roll over and focus on the bright side. Think about it this way — the San Diego Padres have gone through about 20 different batting coaches since Petco Park opened in 2004. Why not try a robot for this job? How much worse could it get? Furthermore, we could pick out two or three local politicians, then replace them with robots. Would a robot screw up the city budget or pass bad municipal laws? Maybe, but maybe not. 
Perhaps we should give political robots the benefit of the doubt for at least one term.</div><div><br></div><div>Speaking of politics, the MAGA-in-Chief recently dropped a big hint that he may be wise to the robot invasion. At a <span class="cf1">recent campaign rally</span> near Dayton, Ohio, Donald Trump attempted to rile up his supporters by speaking about the evils of immigration. “I don’t know if you call them people,” he said at the rally. “In some cases they’re not people, in my opinion.” Critics assailed him for trying to de-humanize immigrants with an inane fascist-like declaration. </div><div><br></div><div>But did the critics miss the point? Could it be Trump was instead acknowledging that a percentage of the immigrant population is made up of foreign robots speaking languages no one has ever heard of? Should we credit Trump for slyly communicating the non-people robot threat via his usual robot dog-whistling technique? That might be too generous a move. Yet, the guy may be sharper than we think. Repeat after me: giraffe, tiger, whale, robot.</div><div><br></div><div>Okay, I know what you are thinking. The allure of a Padres World Series championship is too great to be denied. If a robot can improve our team’s hitting skills, many of you will readily give a thumbs-up to robotization and insist on hiring that soulless batting instructor, thus further solidifying robot domination. 
If so, I say fine.</div><div><br></div><div><aside></aside></div><div>Just don’t come running to me the next time you try ordering a Filet-o-Fish sandwich at McDonald’s and the kiosk screen reads “Big Mac,” and you get irritated and re-order the Filet-o-Fish sandwich, and the kiosk screen subsequently reads “I’m sorry Dave, I’m afraid I can’t do that.” Whether or not your name is Dave, it’s a good bet you are finally going to sense impending doom.</div><div><br></div><div><em>Steve Rodriguez is a retired Marine Corps officer and high school teacher who last taught at <span class="cf1">Olympian High School</span> in Chula Vista.</em></div></div></div>]]></description>
			<pubDate>Wed, 27 Mar 2024 03:08:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/Amazon-Robots_thumb.jpg" length="196839" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?beware-the-robot-invasion---even-if-they-don-t-look-anything-like-arnold-schwarzenegger</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000073</guid>
		</item>
		<item>
			<title><![CDATA[There’s Good, Bad and Ugly in The Rise of Artificial Intelligence]]></title>
			<author><![CDATA[Leonard Novarro and Rosalynn Carmen]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Ai_San_Diego"><![CDATA[Ai San Diego]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000006C"><div><em>Editor’s Note: This is the second in a weekly series of articles about artificial intelligence in San Diego.</em></div><div><em><br></em></div><div>This is about three things — the good, the bad and the ugly.</div><div><br></div><div>Or…artificial intelligence.</div><div><br></div><div>The Industrial Revolution created a society driven by machines, while the AI revolution has ushered in a society in which machines are driven by algorithms. In both cases, the result is displacement of jobs and entire industries. For our generation, this began with the coronavirus.</div><div bis_skin_checked="1"><figure><span class="fs15lh1-5 cf1 ff1"></span></figure></div><div>The pandemic was good for both the future of artificial intelligence and the future of San Diego. AI took its biggest leap forward as techies made the most use of it, working in isolation or in tandem. San Diego, as a city, has leveraged the pandemic to enhance its artificial intelligence capabilities and expand its influence in the field through various initiatives and collaborations. </div><div><br></div><div><aside></aside></div><div>As a result, artificial intelligence is making deep inroads in the San Diego economy. More than a half-dozen surveys in the last two years indicate that, rather than eliminating jobs, AI is creating them. More than three in five AI developers plan to add workers in the next year as demand for these technologies grows. Because these jobs tend to be very lucrative, they will have a strong ripple effect on the broader economy.</div><div><br></div><div>One in four, or roughly 25,000 to 30,000, San Diego County firms are using AI on some level, while 95% of these companies have already developed or adopted some combination of AI or machine learning (ML), a subset of AI that develops algorithms enabling computers to perform tasks without explicit programming. 
</div><div><br></div><div>The remaining 5% of companies are already planning to adopt some form of AI. The ripple effect will likely boost a plethora of related industries, as well as improve productivity, increase revenues and reduce costs in creating new products.</div><div><br></div><div>That’s the good part.</div><div><br></div><div>Now the bad part. </div><div><br></div><div><aside></aside></div><div>Automation is expected to replace 800 million, or one in five, jobs globally by the year 2030. Job displacement, particularly for those who rely on routine tasks that can be easily automated, is a foremost concern.</div><div><br></div><div>Privacy is another. Already we can see a plethora of surveillance devices springing up everywhere. AI is only as good as the data pool in which it operates. And if that pool is filled with nonsense, negative opinions, or racial bias, the result could be skewed outcomes in hiring, financial lending and arrests, and a host of conspiracy theories. Haven’t we already seen that?</div><div><br></div><div><div>Economic disparity is another foregone conclusion. A typical entry-level job right now brings in $120,000 a year — but not without very expensive education or training. How accessible will that be for a youngster growing up and educated in communities like Chollas View or City Heights, particularly from new immigrant communities? Already, there is disparity: More than 60% of jobs in the field are held by white males, less than 25% by males of Asian descent and even fewer by women and other minorities.</div><div><br></div><div>Then there is the whole ethics issue. How much decision making are you willing to give up to a machine, especially one not trained in solving ethical dilemmas? If you have any doubts, try to get through customer service in most companies to resolve a problem and see how fast you reach an actual human being. 
Or not.</div><div><br></div><div>Jobs involving manual or repetitive labor, routine white-collar jobs, customer service and delivery — all will likely be obliterated. On the plus side, this could put an end to those annoying telemarketing calls.</div><div><br></div><div><aside></aside></div><div>Finally, there’s the ugly.</div><div><br></div><div>AI algorithms often require significant computational power, which translates to higher energy consumption. The manufacturing, operation, and cooling of data centers where AI computations are performed already contribute to greenhouse gas emissions and energy consumption.</div><div><br></div><div>Those chips that run everything? As they become smaller and more densely packed with transistors, they give off more and more heat, requiring a cooling system that consumes extra energy.</div><div><br></div><div>The manufacturing process of computer chips involves the use of rare earth metals and other resources that are often sourced through environmentally damaging methods. Moreover, as AI technology advances, older chips become obsolete more quickly, leading to increased electronic waste, or e-waste. Producing these chips in ever greater numbers means more and more extraction from the ground, resulting in habitat destruction, water pollution and loss of biodiversity. The impact on the environment can be — and most likely will be — devastating.</div><div>Taking the human out of the equation, as AI makes more and more decisions for us, could have catastrophic results, in the opinion of many. The political scene has already shown us how pure fakery can influence a large part of the U.S. 
population into accepting abnormal behavior as normal.</div><div><br></div><div><aside></aside></div><div>Lastly, increasing social isolation is inevitable as AI-driven tools take over many everyday functions, thereby reducing face-to-face human connection and weakening interpersonal relationships.</div><div><br></div><div>Some would say we are already there.</div><div><br></div><div><em>Leonard Novarro is vice president of the <span class="imUl cf2"><a href="http://asianheritagesociety.org/index.php" class="imCssLink" onclick="return x5engine.utils.location('http://asianheritagesociety.org/index.php', null, false)">Asian Heritage Society</a></span> and author of <a href="http://wordslingerbook.com/" target="_blank" class="imCssLink"><span class="imUl cf2">WORDSLINGER: The Life and Times of a Newspaper Junkie</span>.</a> Rosalynn Carmen is president of the society and holds the AWS Certified Machine Learning Specialty and AWS Certified DevOps Engineer Professional certifications.</em></div></div></div>]]></description>
			<pubDate>Tue, 26 Mar 2024 10:28:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/AI-cover-1_thumb.jpg" length="267794" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?there-s-good,-bad-and-ugly-in-the-rise-of-artificial-intelligence</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000006C</guid>
		</item>
		<item>
			<title><![CDATA[Liberals in la-la land: High wages, 32-hour workweeks sound great, but there's a steep price]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Trends"><![CDATA[Trends]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000072"><div class="imTACenter"><iframe width="560" height="315" src="https://www.youtube.com/embed/CUZoNfSuWT8?si=_pWHdNIrGheL_EzY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div><div class="imTACenter"><span class="imTALeft fs36lh1-5">I</span><span class="imTALeft fs12lh1-5"> recently had a layover at the Minneapolis-Saint Paul airport on my way to visit my parents in Oregon, so I stopped at McDonald’s for a quick bite. Rather than being greeted by a human cashier, I was met with a hall of self-serve kiosks, where I placed my order and paid for it.</span><br></div><div class="imTALeft"><div>Expect to see a lot more machines and far fewer human workers in states and cities that are artificially driving up the cost of employees through higher minimum wages. </div><div><br></div><div>“The government seems stuck on this way of fixing something that doesn't need to be fixed,” Brian Wesbury, chief economist at First Trust Advisors, told me. “It messes up the marketplace, and businesses attempt to find a way around it because these are not market-based wages – and today with robotics and computers they can. So it ends up hurting people.” </div><div class="imTACenter"><img class="image-2" src="http://asianheritagesociety.org/images/tt.jpg"  title="" alt="" width="768" height="1024" /><br></div><div>While efficient, <span class="cf1">automation like those ordering screens</span> at the Minneapolis airport is emblematic of what happens when the government distorts the marketplace with a heavy-handed regulatory approach. 
</div><div><br></div><div>Minneapolis has <span class="cf1">mandated a $15.57 hourly minimum wage</span> – more than twice the <span class="cf1">federal minimum wage of $7.25</span> – for large employers, but that wage will apply to all businesses starting this summer. While the airport isn’t technically part of any city, its employers are no doubt forced to offer comparable wages to attract workers.</div><div><br></div><div><div>High wages are having other effects, too. <span class="cf1">Minneapolis residents will soon be out of luck</span> if they want to call an Uber or a Lyft. Both companies are leaving town in May after the <span class="cf1">ultra-liberal city council</span> (several of the 13 are declared <span class="cf1">socialists</span>) applied the minimum wage to drivers, overriding the mayor’s veto. The companies said the mandate makes operations in the city unsustainable. </div><div><br></div><div>So in the effort to increase pay for drivers, the city council effectively will <span class="cf1">strip thousands of jobs</span> and leave many people without transportation. </div><div>As Democratic Mayor Jacob Frey said in an interview, “Getting a raise doesn’t do a whole lot of good <span class="cf1">if you lose your job</span>.”</div><div><br></div><div>Nice work, Minneapolis.</div></div><div><hr><div class="imHeading4">California hikes minimum wage, employers lay people off</div><div><br></div><div>Then there’s California. In what should have been an April Fools’ joke, <span class="cf2">a law requiring fast-food workers at large chains to earn $20 an hour took effect April 1</span>. </div><div><br></div><div>Gov. Gavin Newsom, a Democrat, signed the law last year. Obviously, businesses aren’t happy <span class="cf2">because it's bad for their bottom lines</span>. 
</div><div><br></div><div>Newsom admitted as much when he tried to give his buddy Greg Flynn, who runs Panera Bread franchises in California, a loophole from the law. Flynn is a big <span class="cf2">Newsom donor</span>, and Newsom <span class="cf2">had demanded a curious exemption to the law</span> for restaurants “making in-house bread.” </div><div><br></div><div class="imTACenter"><img class="image-0" src="http://asianheritagesociety.org/images/BB1hoZmt.jpg"  title="" alt="" width="768" height="512" /><br></div></div><div><br></div><div><div>After the justified uproar that Flynn was getting special favors, <span class="cf2">he has said he’ll abide by the higher wage</span>.</div><div>It’s no surprise that even before the new minimum wage became reality, <span class="cf2">restaurants started planning layoffs</span>. For instance, <span class="cf2">Pizza Hut has said</span> it will cut more than 1,000 delivery jobs. Many more are following suit.</div></div><div><br></div><div><div>As any economist could have predicted, these businesses are having to downsize their workforce, reduce hours and raise prices. <span class="cf2">That’s what happens</span> when the government meddles in the private market. </div><div><br></div><div>It’s hard to see how this benefits anyone in the long run. Minimum wage jobs have traditionally existed to give people an entry point into the work world, but government-driven inflated wages will take those opportunities away from inexperienced workers. </div><div><br></div><div>And this government intervention ignores that workers have more choices than ever. 
</div><div><br></div><div>“It’s such a competitive marketplace and unemployment is so low that if you’re disappointed in the job in either the culture or the wage or the working conditions, you can move,” Wesbury said.</div></div><div><hr><div class="imHeading4">Less work for same pay? Welcome to Bernie’s world. </div><div><br></div><div>You can always count on Congress’ resident socialist, Vermont Sen. Bernie Sanders, to come up with truly wild (and costly) ideas. He’s a constant pusher of <span class="cf2">“free” college, student debt forgiveness</span> and high <span class="cf2">minimum wages</span>.</div><div><br></div><div class="imTACenter"><img class="image-1" src="http://asianheritagesociety.org/images/AA1aKs8I.jpg"  title="" alt="" width="768" height="512" /><br></div><div class="imTACenter"><br></div><div>Sanders also says Americans <span class="cf2">deserve a 32-hour work week</span>. Employers would be forced to continue paying workers the same pay and benefits as they get for working 40 hours. And he’s not just thinking about it – <span class="cf2">he’s introduced a bill</span>. </div><div><br></div><div>Sounds pretty darn good, I have to say. </div></div><div><br></div><div><div>Unfortunately, in the real world, companies would have to make adjustments to afford this cushy new employee benefit. Employers would either have to hire more workers or lose out on productivity, and consumers would face higher prices as a result. Other unintended consequences would surely follow.</div><div><br></div><div>Bottom line: The private sector works best when the government gets out of the way. It’s a lesson liberals never seem to learn. </div><div><br></div><div><em>Ingrid Jacques is a columnist at USA TODAY. 
Contact her at ijacques@usatoday.com or on X, formerly Twitter: @</em><em><span class="cf2">Ingrid_Jacques</span></em></div></div><div><br></div><div><br></div><div><br></div><div><hr></div></div></div>]]></description>
			<pubDate>Sun, 24 Mar 2024 08:30:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/cal1_thumb.jpg" length="75404" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?liberals-in-la-la-land--high-wages,-32-hour-workweeks-sound-great,-but-there-s-a-steep-price</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000072</guid>
		</item>
		<item>
			<title><![CDATA[San Diego Has Opportunity to Emerge as Leader in Artificial Intelligence]]></title>
			<author><![CDATA[Leonard Novarro and Rosalynn Carmen]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Ai_San_Diego"><![CDATA[Ai San Diego]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000064"><div class="imTACenter"><img class="image-0" src="http://asianheritagesociety.org/images/Inventory-Robot.jpg"  title="" alt="" width="1020" height="574" /></div><div><div class="imTACenter"><span class="fs10lh1-5 cf1 ff1">An inventory robot using software from San Diego’s Brain Corp. Courtesy of the company</span></div></div><div><br></div><div><div bis_skin_checked="1"><em><span class="fs12lh1-5">Editor’s Note: This is the first in a weekly series of articles about artificial intelligence in San Diego.</span></em></div><div bis_skin_checked="1"><em><span class="fs12lh1-5"><br></span></em></div><div bis_skin_checked="1"><span class="fs12lh1-5">My wife Rosalynn and I are the yin and yang of expectation. History is my inspiration; the future, however disruptive, is hers.</span></div><div bis_skin_checked="1"><span class="fs12lh1-5"><br></span></div><div bis_skin_checked="1"><span class="fs12lh1-5">The philosophy of mutual forces working in opposite but interconnected directions originated in China around the time of Confucius. Some 2,500 years later, it’s alive in the way we have looked at things. And one of those things is technology. While Rosalynn embraces it, my motto has been “If it ain’t broke, don’t fix it.”</span></div><div bis_skin_checked="1"><figure></figure></div><div bis_skin_checked="1"><span class="fs12lh1-5">Until artificial intelligence.</span><span class="fs12lh1-5"> </span></div></div><div bis_skin_checked="1"><br></div><div bis_skin_checked="1"><div bis_skin_checked="1"><span class="fs12lh1-5">Some 25 years ago, two years before we started our bi-weekly newspaper ASIA, Yahoo! 
and Google convinced the newspaper industry that it needed to contribute articles to the Internet and to aggregator websites to market its material, which led many newspaper readers to conclude: “Why pay for news when you can get it for free?” That sounded the death knell for newspapers, including ours.</span></div><div bis_skin_checked="1"><br></div><div bis_skin_checked="1"><aside></aside></div><div bis_skin_checked="1"><span class="fs12lh1-5">As for Google, well…that is history.</span></div><div bis_skin_checked="1"><br></div><div bis_skin_checked="1"><span class="fs12lh1-5">We are not making the same mistake with AI — to ignore it. Yet, while we do embrace it, we do so with a caveat. This is the first in a series of articles about what we see as bad as well as good in AI. We will also look at many of the changes already taking place and what is predicted for the future.</span></div><div bis_skin_checked="1"><br></div><div bis_skin_checked="1"><span class="fs12lh1-5">A significant change is how San Diego has become a leader in this technology.</span></div><div bis_skin_checked="1"><br></div><div bis_skin_checked="1"><span class="fs12lh1-5">When I moved to San Diego in 1984 to work for the old San Diego Tribune, this, indeed, was a sleepy town. 
Horton Plaza and the revitalized downtown, most markedly the Gaslamp District, were to change everything, so that by 2015 National Geographic, in a video on “startup cities,” ranked San Diego as “a best place for startups.” A strong entrepreneurial spirit fueled by a culture of collaboration, enhanced by an enviable academic structure led by</span><span class="fs12lh1-5"> </span><span class="imUl fs12lh1-5"><a href="https://ucsd.edu/" target="_blank" class="imCssLink">UC San Diego</a></span><span class="fs12lh1-5"> </span><span class="fs12lh1-5">was the key.</span></div><div bis_skin_checked="1"><br></div><div bis_skin_checked="1"><span class="fs12lh1-5">Industry-wise, San Diego also happened to be the home of a company called</span><span class="fs12lh1-5"> </span><span class="imUl fs12lh1-5"><a href="https://www.qualcomm.com/" target="_blank" class="imCssLink">Qualcomm</a></span><span class="fs12lh1-5">, which, in the nine years after the National Geographic piece, would become one of the most respected companies, if not THE most respected, in the field of AI technology.</span></div><div bis_skin_checked="1"><br></div><div bis_skin_checked="1"><aside></aside></div><div bis_skin_checked="1"><aside></aside></div><div bis_skin_checked="1"><span class="fs12lh1-5">Take</span><span class="fs12lh1-5"> </span><span class="imUl fs12lh1-5"><a href="https://www.qualcomm.com/snapdragon/overview" target="_blank" class="imCssLink">Snapdragon</a></span><span class="fs12lh1-5">, the super brain of chips, developed by Qualcomm in 2007. This revolution in microchip development allows several functions to be executed over multiple devices, such as mobile phones, laptop computers and house alarms, with a single command. 
Industry savants have likened it to having a powerhouse mini-computer in the palm of your hand with the ability, for instance, to drive a car, operate on the Internet, and master one’s phone, all at the same time and across different platforms.</span></div><div bis_skin_checked="1"><br></div><div bis_skin_checked="1"><span class="fs12lh1-5">That’s only a sliver of the work going on right now in San Diego in the field of artificial intelligence. The</span><span class="fs12lh1-5"> </span><span class="imUl fs12lh1-5"><a href="https://contextualrobotics.ucsd.edu/" target="_blank" class="imCssLink">Contextual Robotics Institute</a></span><span class="fs12lh1-5"> </span><span class="fs12lh1-5">of UCSD is another. The goal of the institute is to make robots understand their surroundings, learn from them and use that data to serve people in a number of ways, as first responders and companions, for example. To do so, UCSD brings together experts in various fields, from computer science to neuroscience, sharing labs to integrate the technology needed to move robots.</span></div><div bis_skin_checked="1"><br></div><div bis_skin_checked="1"><span class="fs12lh1-5">Other companies in San Diego taking part in robotic research include the</span><span class="fs12lh1-5"> </span><span class="imUl fs12lh1-5"><a href="https://www.braincorp.com/" target="_blank" class="imCssLink">Brain Corporation</a></span><span class="fs12lh1-5">, which develops software to command robots in menial tasks, and Dexcom, which is working on medical devices to manage diabetes. 
In both cases, robots will have the ability to sense and interpret their environments and use that information to make decisions, including reducing errors and improving quality control in manufacturing.</span></div><div bis_skin_checked="1"><br></div><div bis_skin_checked="1"><span class="fs12lh1-5">A host of other companies, from Acrisure Innovation to ZS, are engaged in a wide range of AI research affecting dozens of industries. ZS alone employs about 13,000 people working out of 14 different offices. Another company, Motorola Solutions, focuses primarily on ways to protect people and property and employs another 21,000 people.</span></div><div bis_skin_checked="1"><br></div><div bis_skin_checked="1"><span class="fs12lh1-5">Other companies are focused on dozens of industries, including health care, aerospace, insurance and all facets of manufacturing. The AI industry is growing at such a rapid pace in San Diego that the</span><span class="fs12lh1-5"> </span><span class="imUl fs12lh1-5"><a href="https://www.sandiegobusiness.org/" target="_blank" class="imCssLink">San Diego Regional Economic Development Corporation</a></span><span class="fs12lh1-5">, partnering with Booz Allen Hamilton and members of the AI community, is identifying clusters where AI and ML (machine learning) have been implemented and assessing the results.</span></div><div bis_skin_checked="1"><br></div><div bis_skin_checked="1"><aside></aside></div><div bis_skin_checked="1"><span class="fs12lh1-5">The partnership has already concluded that instead of wiping jobs off the map, AI technology will create more jobs than it will destroy.</span></div><div bis_skin_checked="1"><span class="fs12lh1-5">Silicon Valley is still the center of the tech and AI universe, but San Francisco as its urban partner has been declining for many reasons, including the cost of living and homelessness. Can San Diego succeed the Bay Area as the new urban center of tech, principally AI? Many feel it already has. 
What emerges in the next five years will, indeed, be interesting to watch.</span></div><div bis_skin_checked="1"><br></div><div bis_skin_checked="1"><em><span class="fs12lh1-5">Leonard Novarro is vice president of the</span><span class="fs12lh1-5"> </span><span class="imUl fs12lh1-5"><a href="http://asianheritagesociety.org/" onclick="return x5engine.imShowBox({ media:[{type: 'iframe', url: 'http://asianheritagesociety.org/', width: 1920, height: 1080, description: ''}]}, 0, this);" class="imCssLink">Asian Heritage Society</a></span><span class="fs12lh1-5"> </span><span class="fs12lh1-5">and author of</span><span class="fs12lh1-5"> </span><span class="fs12lh1-5"><a href="http://wordslingerbook.com/" target="_blank" class="imCssLink"><span class="imUl fs12lh1-5">WORDSLINGER: The Life and Times of a Newspaper Junkie</span>.</a></span></em></div></div></div>]]></description>
			<pubDate>Wed, 20 Mar 2024 22:00:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/San-Diego-2_thumb.jpg" length="80921" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?opinion--san-diego-has-opportunity-to-emerge-as-leader-in-artificial-intelligence</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000064</guid>
		</item>
		<item>
			<title><![CDATA[Following State Law, Fast Food Workers’ Wages Now Starting at $20 an Hour]]></title>
			<author><![CDATA[Times of San Diego]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Trends"><![CDATA[Trends]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000075"><div>Fast-food workers across the state, including San Diego, were celebrating a boost to their minimum wage Monday, thanks to a new California law taking effect at the beginning of April.</div><div><br></div><div>The minimum wage jumped to $20 an hour for fast-food workers, effective Monday.</div><div><br></div><div>Backers of the measure — including Gov. Gavin Newsom — have called it essential to provide workers with a livable wage, but restaurant industry officials warn that it could lead to higher prices or greater use of technology that could affect jobs.</div><div><br></div><div>During a virtual Monday morning news conference, representatives of the <span class="cf1">Service Employees International Union</span>, which represents about 2 million workers in the healthcare, public, and property sectors, and the New York-based Roosevelt Institute think tank insisted that restaurant chains can absorb the increased labor cost without needing to raise prices or eliminate positions.</div><div><br></div><div>The institute issued a <span class="cf1">report</span> last week concluding that higher wages do not have to translate to higher prices and fewer jobs.</div><div><aside></aside></div><div>“There is one big reason why that is, we actually point to it in our report,” Ali Bustamante, deputy director of Worker Power and Economic Security at the Roosevelt Institute, said.</div><div><br></div><div>“We find that prices over the past 10 years, over the past decade, in the fast food industry increased by 46.8%, compared to 28.7% overall (in the restaurant industry).”</div><div><br></div><div>“One of the reasons that the prices have gone up a lot faster in the fast food industry relative other industries is the fact that markup has also gone up, which is basically the difference between prices and the actual operation costs that businesses incur in order to render their prices,” Bustamante 
said.</div><div><br></div><div>Bustamante said the most “unrealistic assumption” puts the cost of increasing the minimum wage at $4.6 billion. He added that excess profits that corporations are taking home could easily pay for a wage hike.</div><div><br></div><div>The institute’s report also found that many fast food workers in the state had already begun to earn $16 an hour or more, and the increase to $20 an hour for some fast food operators will not be an automatic $4 increase for each worker.</div><div><br></div><div><aside></aside></div><div>Angelica Hernandez, a fast food worker represented by SEIU, said the increase will help her breathe a “little easier” in terms of paying her rent and buying groceries. She said the wage bump is a “huge raise” and that she and her colleagues will continue to fight for better wages and working conditions.</div><div><br></div><div>Representatives for Chipotle, McDonald’s and Jack in the Box did not immediately respond to a request for comment, nor did officials with Yum! 
Brands, which owns Pizza Hut and other fast food companies such as Taco Bell and KFC.</div><div><br></div><div>Industry representatives, including the <span class="cf1">California Restaurant Association</span>, had indicated that the wage increase would be burdensome to some owners, noting that many fast-food outlets are operated by small business owners under franchise agreements with restaurant chains.</div><div><br></div><div>San Diego-<span class="cf1">based</span> Jack in the Box’s chief executive Darin Harris said the company would depend on upward price adjustments, expecting menu prices to increase by 6% to 8%, Nasdaq reported in early March.</div><div><br></div><div>During an earnings call in February, Chipotle’s chief financial and administrative officer, Jack Hartung, had said the company would need to impose a “mid-single-digit price increase” in California to cover the wage increase.</div><div><br></div><div><aside></aside></div><div>Chipotle has yet to make an official announcement on new prices, and other fast food companies, such as McDonald’s, Jack in the Box, and Starbucks, have also said they are considering raising menu prices or changing their operations.</div><div><br></div><div>At some Southern California Starbucks locations, prices of select individual drinks went up Monday morning, some by as much as 50 cents.</div><div>The Los Angeles Times <span class="cf1">reported</span> Monday that McDonald’s was “exploring several ways to counterbalance the increase in labor costs and yet to decide how much it will raise the price of the menu items at its corporate-owned stores.”</div><div><br></div><div><aside></aside></div><div>The company provides “informed pricing recommendations” to its franchise locations, but final pricing is at the discretion of franchisees.</div><div>Starbucks officials told The Times that the company “elected to increase wages for all employees regardless of their level of 
experiences.”</div><div><aside></aside></div><div>Two Pizza Hut operators had previously announced plans to lay off more than 1,200 delivery drivers in Los Angeles, Orange and Riverside counties to prepare for the minimum wage boost. Pizza Hut franchises planned to pivot toward third-party apps like DoorDash, GrubHub and UberEats for pizza and food delivery.</div><div><br></div><div>Yum! Brands previously stated “its franchisees independently own and operate their restaurants in accordance with local market dynamics and comply with all federal, state, and local regulations while continuing to provide quality service and food to our customers via carry out and delivery.”</div><div><br></div><div>Michael Reich, a professor of economics at UC Berkeley and the chair of the Center on Wage and Employment Dynamics, pushed back against the narrative that fast food companies need to increase prices to cover wage increases.</div><div><br></div><div>He described the fast food industry as “very healthy and growing fast.” Reich said “sales have gone up and of course profits have gone up as well,” but wages have lagged compared to the wages for the top 20% of the workforce. Reich also cautioned that while minimum wage increases and price hikes are correlated, correlation is not necessarily causation.</div><div><br></div><div>For example, he noted, according to McDonald’s reports for its Fourth Quarter and full year results in 2023, the company’s gross profit was more than $14 billion, a 10.26% increase from 2022. 
Global comparable sales grew 9% in 2023, and over 30% since 2019, he said.</div><div><aside></aside></div><div>“Our global comparable sales growth of 9% for the year is a testament to the tremendous dedication of the entire McDonald’s system,” McDonald’s president and chief executive officer Chris Kempczinski said in a <span class="cf1">statement</span> issued in February.</div><div><br></div><div>“Strong execution of our Accelerating the Arches strategy has driven over 30% comparable sales growth since 2019 as our talented crew members, and the industry’s best franchisees and suppliers have demonstrated proven agility with a relentless focus on the customer. By evolving the way we work across the system, we remain confident in the resilience of our business amid macro challenges that will persist in 2024.”</div><div>Chipotle’s Fourth Quarter and full year results in 2023 also showed growth, as total revenue increased by 14.3% to $9.9 billion from 2022. The company also opened a total of 271 new restaurants, according to a company statement.</div><div><br></div><div>Last year “was an outstanding year where we delivered strong transaction growth driven by throughput and menu innovation, opened a record number of new restaurants, surpassed $3 million in AUVs (average-unit volume or how much chains are earning per store measured on a mature base) and formed our first international partnership,” Brian Niccol, chairman and chief executive officer of Chipotle, said in a <span class="cf1">statement</span> issued in February.</div><div><br></div><div>The law, <span class="cf1">Assembly Bill 1228</span>, boosts fast food workers’ earnings from the state’s minimum wage of $16 per hour to $20 per hour. 
The law also establishes a Fast Food Council, representing a path forward to resolve “employer-community concerns while preserving fast food workers by securing a seat at the table to raise standards,” according to the office of Assemblyman Chris Holden, D-Pasadena, who introduced the bill.</div><div><br></div><div><aside></aside></div><div>The council will consist of nine voting members (representatives of the fast food industry, franchisees, employees and advocates, plus one unaffiliated member of the public) and two non-voting members, who will provide direction and coordinate with state agencies to ensure the health, safety and employment of fast food workers.</div><div><br></div><div>Responsibilities of the council will also include development of fast food worker standards, covering wages, working conditions and training.</div><div>AB 1228 will affect more than 550,000 fast food workers and about 30,000 restaurants in the state, officials said.</div><div><br></div><div><em>City News Service contributed to this report.</em></div></div>]]></description>
			<pubDate>Sun, 17 Mar 2024 03:58:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/Amazon_thumb.jpg" length="119484" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?following-state-law,-fast-food-workers--wages-now-starting-at--20-an-hour</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000075</guid>
		</item>
		<item>
			<title><![CDATA[Carlsbad-Based A.I. Startup Enlists UCSD Health’s Kader to Chair Advisory Board]]></title>
			<author><![CDATA[Times of San Diego]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Ai_San_Diego"><![CDATA[Ai San Diego]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000065"><div>An AI startup that seeks to improve patient outcomes by rapidly discovering hidden patterns in life sciences data has named a UC San Diego Health physician and professor to its advisory board.</div><div><br></div><div><span class="imUl cf1"><a href="https://providers.ucsd.edu/details/11754/urology-cancer-surgery" target="_blank" class="imCssLink">Dr. A. Karim Kader</a></span>, a professor of urology, has joined Carlsbad-based <span class="imUl cf1"><a href="https://limmi.io/" target="_blank" class="imCssLink">Limmi’s</a></span> advisory board as chairman.</div><div>In addition to his work at UCSD, Dr. Kader spent 15 years building Stratify Genomics, a biotech company founded to commercialize genetic tests to determine a patient’s risk of developing prostate cancer. </div><div><br></div><div>Limmi, founded in 2022, utilizes cutting-edge artificial intelligence and deep learning methodologies on a platform built specifically for healthcare and biotech, offering customers real-time predictions and insights for patients.</div><div><br></div><div>The aim? “Radically changing the way their customers manage healthcare operations and patient health management,” according to a news release from Limmi.</div><div><br></div><div><aside></aside></div><div><aside></aside></div><div>“We’re in a desperate need at this time for solutions like the one Limmi is offering. 
We’re creating all this big data that we don’t know what to do with – because we’re only human and we can not process all of the different data points that we get as clinicians in a sensical fashion,” Kader said in the release.</div><div><br></div><div>He worked for 15 years on his discovery regarding genetic risk in prostate cancer and on establishing his company, an effort he thinks “I could have done in probably three to five years with Limmi’s technology – and brought this product to patients much sooner.”</div><div><br></div><div>In addition to being a board-certified urologist who specializes in detecting, treating and preventing prostate cancer, Dr. Kader is nationally recognized for his expertise in performing robot-assisted radical cystectomy and urinary diversion for patients with bladder cancer. He holds several patents for genetic discoveries related to early detection of prostate cancer.</div><div><br></div><div>“We are very pleased to welcome Dr. Karim Kader to our board and we look forward to leveraging his talent, expertise and veteran leadership as a researcher, innovator and a practicing surgeon,” said Limmi’s co-founder and Chairman of the Board, <span class="imUl cf1"><a href="https://limmi.io/about-us/#leadership" target="_blank" class="imCssLink">Trevor Vieweg</a></span>.</div></div>]]></description>
			<pubDate>Thu, 07 Mar 2024 23:40:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/in1_thumb.jpg" length="55353" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?carlsbad-based-a-i--startup-enlists-ucsd-health-s-kader-to-chair-advisory-board</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000065</guid>
		</item>
		<item>
			<title><![CDATA[Alex Padilla Touts $25.5M for San Diego Projects in Appropriations Bills]]></title>
			<author><![CDATA[Times of San Diego]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Ai_San_Diego"><![CDATA[Ai San Diego]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000066"><div><span class="cf1">California Sen. Alex Padilla</span><span class="imUl cf1"> </span>Tuesday touted more than $25.5 million in potential federal funding for 17 projects across the San Diego region in Congress’ first package of Fiscal Year 2024 appropriations bills.</div><div><br></div><div>“I am proud to have secured millions in funding for projects that will improve the quality of life across the San Diego region,” Padilla said. “As we face increasingly severe weather like the atmospheric river last month, these investments will upgrade local stormwater and sewer infrastructure to improve storm resilience and the quality of our water supply.</div><div><br></div><div>“These investments will also support safer streets, more housing, and additional tools to combat wildfires,” he added.</div><div><br></div><div><aside></aside></div><div>The House and Senate will consider the bills this week ahead of the March 8 funding deadline before they are sent to the president to be signed into law.</div><div><br></div><div>Some of the projects that would be funded include:</div><div><aside></aside></div><div>— $6 million for Scripps Institution of Oceanography to further explore offshore DDT pollution in the San Pedro Basin and the impacts on marine life;</div><div>— $3.61 million to replace aging water infrastructure in Borrego Springs;</div><div>— $3.3 million for Oceanside Harbor dredging;</div><div>— $1.93 million for UC San Diego’s Wildfire Technology Commons, which seeks to prevent destruction from wildfires by using data and AI as tools for next-generation fire models and mapping;</div><div>— $1.5 million for the city of San Diego shelter expansion project;</div><div><aside></aside></div><div>— $1 million for developing a master plan for improving Oceanside’s Landes Community Center and Park.</div><div><em>— City News Service</em></div></div>]]></description>
			<pubDate>Tue, 05 Mar 2024 00:21:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/Alex-Padilla_thumb.jpg" length="46188" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?alex-padilla-touts--25-5m-for-san-diego-projects-in-appropriations-bills</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000066</guid>
		</item>
		<item>
			<title><![CDATA[Here Come the AI Worms]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000062"><div>As generative AI systems like <span class="cf1">OpenAI's ChatGPT</span> and <span class="cf1">Google's Gemini</span> become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of the systems that can <span class="cf1">complete boring chores for you</span>: think automatically making calendar bookings and potentially <span class="cf1">buying products</span>. But giving the tools more freedom also increases the potential ways they can be attacked.</div><div><br></div><div>Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created what they claim is one of the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.</div><div><br></div><div>Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original <span class="cf1">Morris computer worm</span> that caused chaos across the internet in 1988. In a <span class="cf1">research paper and website</span> shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages—breaking some security protections in ChatGPT and Gemini in the process.</div><div><br></div><div>The research, which was undertaken in test environments and not against a publicly available email assistant, comes as <span class="cf1">large language models (LLMs)</span> are increasingly becoming multimodal, being able to generate images and <span class="cf1">video as well as text</span>. 
While generative AI worms haven’t been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.</div><div><br></div><div>Most generative AI systems work by being fed prompts—text instructions that tell the tools to answer a question or create an image. However, these prompts can also be weaponized against the system. <span class="cf1">Jailbreaks</span> can make a system disregard its safety rules and spew out toxic or hateful content, while <span class="cf1">prompt injection attacks</span> can give a chatbot secret instructions. For example, an attacker may hide text on a webpage <span class="cf1">telling an LLM to act as a scammer and ask for your bank details</span>.</div><div><br></div><div>To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional <span class="cf1">SQL injection and buffer overflow attacks</span>, the researchers say.</div><div><br></div><div>To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and the open source LLM <span class="cf1">LLaVA</span>. 
They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.</div><div><br></div><div><div class="imTACenter"><iframe width="678" height="382" src="https://www.youtube.com/embed/FL3qHH02Yd4" title="ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></div></div><div><br></div><div><div><span class="fs14lh1-5 cf1 ff1">In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which “poisons” the database of an email assistant using</span><span class="fs14lh1-5 cf1 ff1"> </span><span class="fs14lh1-5 cf1 ff1">retrieval-augmented generation (RAG)</span><span class="fs14lh1-5 cf1 ff1">, a way for LLMs to pull in extra data from outside their systems. When the email is retrieved by the RAG, in response to a user query, and is sent to GPT-4 or Gemini Pro to create an answer, it “jailbreaks the GenAI service” and ultimately steals data from the emails, Nassi says. “The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client,” Nassi says.</span></div><div><span class="fs14lh1-5 cf1 ff1">In the second method, the researchers say, an image with a malicious prompt embedded makes the email assistant forward the message on to others. 
“By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent,” Nassi says.</span></div><div><span class="fs14lh1-5 cf1 ff1"><br></span></div><div><span class="fs14lh1-5 cf1 ff1">In a video demonstrating the research, the email system can be seen forwarding a message multiple times. The researchers also say they could extract data from emails. “It can be names, it can be telephone numbers, credit card numbers, SSN, anything that is considered confidential,” Nassi says.</span></div><div><span class="fs14lh1-5 cf1 ff1"><br></span></div><div><span class="fs14lh1-5 cf1 ff1">Although the research breaks some of the safety measures of ChatGPT and Gemini, the researchers say the work is a warning about “bad architecture design” within the wider AI ecosystem. Nevertheless, they reported their findings to Google and OpenAI. “They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn't been checked or filtered,” a spokesperson for OpenAI says, adding that the company is working to make its systems “more resilient” and saying developers should “use methods that ensure they are not working with harmful input.” Google declined to comment on the research. Messages Nassi shared with WIRED show the company’s researchers requested a meeting to talk about the subject.</span></div><div><br></div><div><span class="fs14lh1-5 cf1 ff1">While the demonstration of the worm takes place in a largely controlled environment, multiple security experts who reviewed the research say that the future risk of generative AI worms is one that developers should take seriously. This particularly applies when AI applications are given permission to take actions on someone’s behalf—such as sending emails or booking appointments—and when they may be linked up to other AI agents to complete these tasks. 
In other recent research, security researchers from Singapore and China have shown how they could</span><span class="fs14lh1-5 cf1 ff1"> </span><span class="fs14lh1-5 cf1 ff1">jailbreak 1 million LLM agents in under five minutes</span><span class="fs14lh1-5 cf1 ff1">.</span></div><div><span class="fs14lh1-5 cf1 ff1"><br></span></div><div><span class="fs14lh1-5 cf1 ff1">Sahar Abdelnabi, a researcher at the CISPA Helmholtz Center for Information Security in Germany, who worked on some of the first</span><span class="fs14lh1-5 cf1 ff1"> </span><span class="fs14lh1-5 cf1 ff1">demonstrations of prompt injections against LLMs in May 2023 and highlighted that worms</span><span class="fs14lh1-5 cf1 ff1"> </span><span class="fs14lh1-5 cf1 ff1">may be possible, says that when AI models take in data from external sources or the AI agents can work autonomously, there is the chance of worms spreading. “I think the idea of spreading injections is very plausible,” Abdelnabi says. “It all depends on what kind of applications these models are used in.” Abdelnabi says that while this kind of attack is simulated at the moment, it may not be theoretical for long.</span></div><div><span class="fs14lh1-5 cf1 ff1"><br></span></div><div><span class="fs14lh1-5 cf1 ff1">In a paper covering their findings, Nassi and the other researchers say they anticipate seeing generative AI worms in the wild in the next two to three years. “GenAI ecosystems are under massive development by many companies in the industry that integrate GenAI capabilities into their cars, smartphones, and operating systems,” the research paper says.</span></div></div></div>]]></description>
			<pubDate>Fri, 01 Mar 2024 12:21:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/security_thumb.jpg" length="469551" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?here-come-the-ai-worms</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000062</guid>
		</item>
		<item>
			<title><![CDATA[Startup accelerates progress toward light-speed computing]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000061"><div>Our ability to cram ever-smaller transistors onto a chip has enabled today’s age of ubiquitous computing. But that approach is finally running into limits, with some experts <span class="cf1">declaring an end to Moore’s Law</span> and a related principle, known as Dennard’s Scaling.</div><div><br></div><div>Those developments couldn’t be coming at a worse time. Demand for computing power has skyrocketed in recent years thanks in large part to the rise of artificial intelligence, and it shows no signs of slowing down.</div><div><br></div><div>Now Lightmatter, a company founded by three MIT alumni, is continuing the remarkable progress of computing by rethinking the lifeblood of the chip. Instead of relying solely on electricity, the company also uses light for data processing and transport. The company’s first two products, a chip specializing in artificial intelligence operations and an interconnect that facilitates data transfer between chips, use both photons and electrons to drive more efficient operations.</div><div><br></div><div>“The two problems we are solving are ‘How do chips talk?’ and ‘How do you do these [AI] calculations?’” Lightmatter co-founder and CEO Nicholas Harris PhD ’17 says. “With our first two products, Envise and Passage, we’re addressing both of those questions.”</div><div><br></div><div>In a nod to the size of the problem and the demand for AI, Lightmatter raised just north of $300 million in 2023 at a valuation of $1.2 billion. Now the company is demonstrating its technology with some of the largest technology companies in the world in hopes of reducing the massive energy demand of data centers and AI models.</div><div><br></div><div>"We’re going to enable platforms on top of our interconnect technology that are made up of hundreds of thousands of next-generation compute units,” Harris says. 
“That simply wouldn’t be possible without the technology that we’re building.”</div><div><br></div><div><strong>From idea to $100K</strong></div><div>Prior to MIT, Harris worked at the semiconductor company Micron Technology, where he studied the fundamental devices behind integrated chips. The experience made him see how the traditional approach for improving computer performance — cramming more transistors onto each chip — was hitting its limits.</div><div><br></div><div>“I saw how the roadmap for computing was slowing, and I wanted to figure out how I could continue it,” Harris says. “What approaches can augment computers? Quantum computing and photonics were two of those pathways.”</div><div><br></div><div>Harris came to MIT to work on photonic quantum computing for his PhD under Dirk Englund, an associate professor in the Department of Electrical Engineering and Computer Science. As part of that work, he built silicon-based integrated photonic chips that could send and process information using light instead of electricity.</div><div><br></div><div>The work led to dozens of patents and more than 80 research papers in prestigious journals like <em>Nature</em>. But another technology also caught Harris’s attention at MIT.</div><div><br></div><div>“I remember walking down the hall and seeing students just piling out of these auditorium-sized classrooms, watching relayed live videos of lectures to see professors teach deep learning,” Harris recalls, referring to the artificial intelligence technique. 
“Everybody on campus knew that deep learning was going to be a huge deal, so I started learning more about it, and we realized that the systems I was building for photonic quantum computing could actually be leveraged to do deep learning.”</div><div><br></div><div>Harris had planned to become a professor after his PhD, but he realized he could attract more funding and innovate more quickly through a startup, so he teamed up with Darius Bunandar PhD ’18, who was also studying in Englund’s lab, and Thomas Graham MBA ’18. The co-founders successfully launched into the startup world by <span class="cf1">winning</span> the 2017 MIT $100K Entrepreneurship Competition.</div><div><br></div><div><strong>Seeing the light</strong></div><div>Lightmatter’s Envise chip takes the part of computing that electrons do well, like memory, and combines it with what light does well, like performing the massive matrix multiplications of deep-learning models.</div><div><br></div><div>“With photonics, you can perform multiple calculations at the same time because the data is coming in on different colors of light,” Harris explains. “In one color, you could have a photo of a dog. In another color, you could have a photo of a cat. In another color, maybe a tree, and you could have all three of those operations going through the same optical computing unit, this matrix accelerator, at the same time. That drives up operations per area, and it reuses the hardware that's there, driving up energy efficiency.”</div><div><br></div><div>Passage takes advantage of light’s latency and bandwidth advantages to link processors in a manner similar to how fiber optic cables use light to send data over long distances. It also enables chips as big as entire wafers to act as a single processor. 
Sending information between chips is central to running the massive server farms that power cloud computing and run AI systems like ChatGPT.</div><div><br></div><div>Both products are designed to bring energy efficiencies to computing, which Harris says are needed to keep up with rising demand without bringing huge increases in power consumption.</div><div><br></div><div>“By 2040, some predict that around 80 percent of all energy usage on the planet will be devoted to data centers and computing, and AI is going to be a huge fraction of that,” Harris says. “When you look at computing deployments for training these large AI models, they’re headed toward using hundreds of megawatts. Their power usage is on the scale of cities.”</div><div><br></div><div>Lightmatter is currently working with chipmakers and cloud service providers for mass deployment. Harris notes that because the company’s equipment runs on silicon, it can be produced by existing semiconductor fabrication facilities without massive changes in process.</div><div><br></div><div>The ambitious plans are designed to open up a new path forward for computing that would have huge implications for the environment and economy.</div><div><br></div><div>“We’re going to continue looking at all of the pieces of computers to figure out where light can accelerate them, make them more energy efficient, and faster, and we’re going to continue to replace those parts,” Harris says. “Right now, we’re focused on interconnect with Passage and on compute with Envise. But over time, we’re going to build out the next generation of computers, and it’s all going to be centered around light.”</div></div>]]></description>
			<pubDate>Thu, 29 Feb 2024 12:14:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/MIT-Lightmatter-01_0_thumb.jpg" length="238859" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?startup-accelerates-progress-toward-light-speed-computing</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000061</guid>
		</item>
		<item>
			<title><![CDATA[UK’s enemies could use AI deepfakes to try to rig election, says James Cleverly]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000005E"><div>Criminals and “malign actors” working on behalf of malicious states could use AI-generated “deepfakes” to hijack the general election, the home secretary has said.</div><div><br></div><div>James Cleverly was speaking before meetings with social media bosses and said the rapid advancement of technology could pose a serious threat to elections across the globe.</div><div><br></div><div>He warned that people working on behalf of states such as Russia and Iran could generate thousands of deepfakes – highly realistic hoax images and videos – to manipulate the democratic process in countries such as the UK.</div><div class="imTACenter"><img class="image-0" src="http://asianheritagesociety.org/images/Screenshot-2024-02-26-184432.png" title="" alt="" width="793" height="493" /><br></div><div class="imTACenter"><div class="imTACenter"><span class="fs11lh1-5 cf1 ff1">James Cleverly says realistic hoax images and videos could be used by ‘foreign malign actors’ to manipulate voters</span></div></div><div class="imTACenter"><br></div><div>He told <span class="cf2">the Times</span> that “increasingly today the battle of ideas and policies takes place in the ever-changing and expanding digital sphere”, adding: “The era of deepfake and AI-generated content to mislead and disrupt is already in play.</div><div><br></div><div>“The landscape it is inserted into needs its rules, transparency and safeguards for its users. 
The questions asked about digital content and the sources of digital content are no less relevant than those asked about the content and sources at dispatch boxes, newsrooms or billboard ads.”</div><div>The home secretary will use meetings with Silicon Valley bosses at Google, Meta, Apple, YouTube and others to urge collective action to protect democracy.</div><div><br></div><div><div>It is estimated that 2 billion people around the world will vote in national elections throughout 2024, including in the UK, US, India and 60 other countries.</div><div><br></div><div>A number of deepfake audios imitating Keir Starmer, the Labour leader, and the mayor of London, Sadiq Khan, were shared online last year. There have been cases of deepfake BBC News videos purporting to examine Rishi Sunak’s finances.</div><div><br></div><div>It comes as <span class="cf2">major technology companies signed a pact</span> earlier this month to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.</div><div><br></div><div>Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they will respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies – including Elon Musk’s X – are signing on to the accord.</div><div><br></div><div>“Everybody recognises that no one tech company, no one government, no one civil society organisation is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for <span class="cf2">Meta</span>, the parent company of Facebook and Instagram, in an interview before the summit.</div></div></div>]]></description>
			<pubDate>Wed, 28 Feb 2024 10:58:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/Screenshot-2024-02-26-181148_thumb.png" length="406828" type="image/png" />
			<link>http://asianheritagesociety.org/blog/?uk-s-enemies-could-use-ai-deepfakes-to-try-to-rig-election,-says-james-cleverly</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000005E</guid>
		</item>
		<item>
			<title><![CDATA[New model identifies drugs that shouldn’t be taken together]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000005D"><div>Any drug that is taken orally must pass through the lining of the digestive tract. Transporter proteins found on cells that line the GI tract help with this process, but for many drugs, it’s unknown which of those transporters they use to exit the digestive tract.</div><div><br></div><div>Identifying the transporters used by specific drugs could help to improve patient treatment because if two drugs rely on the same transporter, they can interfere with each other and should not be prescribed together.</div><div><br></div><div>Researchers at MIT, Brigham and Women’s Hospital, and Duke University have now developed a multipronged strategy to identify the transporters used by different drugs. Their approach, which makes use of both tissue models and machine-learning algorithms, has already revealed that a commonly prescribed antibiotic and a blood thinner can interfere with each other.</div><div><br></div><div>“One of the challenges in modeling absorption is that drugs are subject to different transporters. 
This study is all about how we can model those interactions, which could help us make drugs safer and more efficacious, and predict potential toxicities that may have been difficult to predict until now,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.</div><div><br></div><div>Learning more about which transporters help drugs pass through the digestive tract could also help drug developers improve the absorbability of new drugs by adding excipients that enhance their interactions with transporters.</div><div><br></div><div>Former MIT postdocs Yunhua Shi and Daniel Reker are the lead authors of the study, which <span class="cf1">appears today</span> in <em>Nature Biomedical Engineering</em>.</div><div><br></div><div><strong>Drug transport</strong></div><div>Previous studies have identified several transporters in the GI tract that help drugs pass through the intestinal lining. Three of the most commonly used, which were the focus of the new study, are BCRP, MRP2, and PgP.</div><div><br></div><div>For this study, Traverso and his colleagues adapted a <span class="cf1">tissue model</span> they had developed in 2020 to measure a given drug’s absorbability. This experimental setup, based on pig intestinal tissue grown in the laboratory, can be used to systematically expose tissue to different drug formulations and measure how well they are absorbed.</div><div><br></div><div>To study the role of individual transporters within the tissue, the researchers used short strands of RNA called siRNA to knock down the expression of each transporter. In each section of tissue, they knocked down different combinations of transporters, which enabled them to study how each transporter interacts with many different drugs.</div><div><br></div><div>“There are a few roads that drugs can take through tissue, but you don't know which road. 
We can close the roads separately to figure out, if we close this road, does the drug still go through? If the answer is yes, then it’s not using that road,” Traverso says.</div><div><br></div><div>The researchers tested 23 commonly used drugs using this system, allowing them to identify transporters used by each of those drugs. Then, they trained a machine-learning model on that data, as well as data from several drug databases. The model learned to make predictions of which drugs would interact with which transporters, based on similarities between the chemical structures of the drugs.</div><div><br></div><div>Using this model, the researchers analyzed a new set of 28 currently used drugs, as well as 1,595 experimental drugs. This screen yielded nearly 2 million predictions of potential drug interactions. Among them was the prediction that doxycycline, an antibiotic, could interact with warfarin, a commonly prescribed blood-thinner. Doxycycline was also predicted to interact with digoxin, which is used to treat heart failure, levetiracetam, an antiseizure medication, and tacrolimus, an immunosuppressant.</div><div><br></div><div><strong>Identifying interactions</strong></div><div>To test those predictions, the researchers looked at data from about 50 patients who had been taking one of those three drugs when they were prescribed doxycycline. This data, which came from a patient database at Massachusetts General Hospital and Brigham and Women’s Hospital, showed that when doxycycline was given to patients already taking warfarin, the level of warfarin in the patients’ bloodstream went up, then went back down again after they stopped taking doxycycline.</div><div><br></div><div>That data also confirmed the model’s predictions that the absorption of doxycycline is affected by digoxin, levetiracetam, and tacrolimus. 
Only one of those drugs, tacrolimus, had been previously suspected to interact with doxycycline.</div><div><br></div><div>“These are drugs that are commonly used, and we are the first to predict this interaction using this accelerated in silico and in vitro model,” Traverso says. “This kind of approach gives you the ability to understand the potential safety implications of giving these drugs together.”</div><div>In addition to identifying potential interactions between drugs that are already in use, this approach could also be applied to drugs now in development. Using this technology, drug developers could tune the formulation of new drug molecules to prevent interactions with other drugs or improve their absorbability. Vivtex, a biotech company co-founded in 2018 by former MIT postdoc Thomas von Erlach, MIT Institute Professor Robert Langer, and Traverso to develop new oral drug delivery systems, is now pursuing that kind of drug-tuning.</div><div><br></div><div>The research was funded, in part, by the U.S. National Institutes of Health, the Department of Mechanical Engineering at MIT, and the Division of Gastroenterology at Brigham and Women’s Hospital.</div><div><br></div><div>Other authors of the paper include Langer, von Erlach, James Byrne, Ameya Kirtane, Kaitlyn Hess Jimenez, Zhuyi Wang, Natsuda Navamajiti, Cameron Young, Zachary Fralish, Zilu Zhang, Aaron Lopes, Vance Soares, Jacob Wainer, and Lei Miao.</div></div>]]></description>
			<pubDate>Tue, 27 Feb 2024 10:50:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/drugs_thumb.jpg" length="697564" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?new-model-identifies-drugs-that-shouldn-t-be-taken-together</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000005D</guid>
		</item>
		<item>
			<title><![CDATA[Outrage After Students Shared AI-Generated Nude Pics of Classmates at Middle School]]></title>
			<author><![CDATA[Times of San Diego]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Ai_San_Diego"><![CDATA[Ai San Diego]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000069"><div><span class="imUl cf1"><a href="https://www.bhusd.org/" target="_blank" class="imCssLink">Beverly Hills Unified School District</a></span> officials Monday were investigating the discovery of fake nude photos of students at a middle school that were apparently generated using artificial intelligence.</div><div><br></div><div>The photos were first detected by officials at <span class="imUl cf1"><a href="https://bvms.bhusd.org/" target="_blank" class="imCssLink">Beverly Vista Middle School</a></span> last week, according to a message sent by district and school administrators to parents and staff. According to the message, administrators were informed by students about “the creation and dissemination by other students of artificial intelligence generated (AI) images that superimposed the faces of our students onto AI-generated nude bodies.”</div><div><br></div><div>“As the investigation is progressing today, more victims are being identified,” according to the district’s message. “We are taking every measure to support those affected and to prevent any further incidents. We want to make it unequivocally clear that this behavior is unacceptable and does not reflect the values of our school community. Although we are aware of similar situations occurring all over the nation, we must act now. This behavior rises to a level that requires the entire community to work in partnership to ensure it stops immediately.”</div><div><br></div><div>It was unclear how many photos were discovered and how many students were affected. 
It was also unclear how the photos were circulated.</div><div>District officials urged parents to speak to their children “about this dangerous behavior” and encouraged students to “talk to your friends about how disturbing and inappropriate this manipulation of images is.”</div><div><br></div><div>“Collectively, we are nothing short of outraged by this behavior and we are prepared to implement the most severe disciplinary actions allowable under California Education Code,” according to the message. “Any student found to be creating, disseminating, or in possession of AI-generated images of this nature will face disciplinary actions, including, but not limited to, a recommendation for expulsion.”</div><div><br></div><div><em>City News Service contributed to this article.</em></div></div>]]></description>
			<pubDate>Mon, 26 Feb 2024 10:31:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/885_thumb.jpg" length="114027" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?outrage-after-students-shared-ai-generated-nude-pics-of-classmates-at-middle-school</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000069</guid>
		</item>
		<item>
			<title><![CDATA[A new way to let AI chatbots converse all day without crashing]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000005B"><div>When a human-AI conversation involves many rounds of continuous dialogue, the powerful large language machine-learning models that drive chatbots like ChatGPT sometimes start to collapse, causing the bots’ performance to rapidly deteriorate.</div><div><br></div><div>A team of researchers from MIT and elsewhere has pinpointed a surprising cause of this problem and developed a simple solution that enables a chatbot to maintain a nonstop conversation without crashing or slowing down.</div><div><br></div><div>Their method involves a tweak to the key-value cache (which is like a conversation memory) at the core of many large language models. In some methods, when this cache needs to hold more information than it has capacity for, the first pieces of data are bumped out. This can cause the model to fail.</div><div><br></div><div>By ensuring that these first few data points remain in memory, the researchers’ method allows a chatbot to keep chatting no matter how long the conversation goes.</div><div><br></div><div>The method, called StreamingLLM, enables a model to remain efficient even when a conversation stretches on for more than 4 million words. When compared to another method that avoids crashing by constantly recomputing part of the past conversations, StreamingLLM performed more than 22 times faster.</div><div><br></div><div>This could allow a chatbot to conduct long conversations throughout the workday without needing to be continually rebooted, enabling efficient AI assistants for tasks like copywriting, editing, or generating code.</div><div><br></div><div>“Now, with this method, we can persistently deploy these large language models. 
By making a chatbot that we can always chat with, and that can always respond to us based on our recent conversations, we could use these chatbots in some new applications,” says Guangxuan Xiao, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on StreamingLLM.</div><div><br></div><div>Xiao’s co-authors include his advisor, Song Han, an associate professor in EECS, a member of the MIT-IBM Watson AI Lab, and a distinguished scientist of NVIDIA; as well as Yuandong Tian, a research scientist at Meta AI; Beidi Chen, an assistant professor at Carnegie Mellon University; and senior author Mike Lewis, a research scientist at Meta AI. The work will be presented at the International Conference on Learning Representations.</div><div><br></div><div><strong>A puzzling phenomenon</strong></div><div>Large language models encode data, like words in a user query, into representations called tokens. Many models employ what is known as an attention mechanism that uses these tokens to generate new text.</div><div><br></div><div>Typically, an AI chatbot writes new text based on text it has just seen, so it stores recent tokens in memory, called a KV Cache, to use later. The attention mechanism builds a grid that includes all tokens in the cache, an “attention map” that maps out how strongly each token, or word, relates to each other token.</div><div><br></div><div>Understanding these relationships is one feature that enables large language models to generate human-like text.</div><div><br></div><div>But when the cache gets very large, the attention map can become even more massive, which slows down computation.</div><div><br></div><div>Also, if encoding content requires more tokens than the cache can hold, the model’s performance drops. 
For instance, one popular model can store 4,096 tokens, yet there are about 10,000 tokens in an academic paper.</div><div><br></div><div>To get around these problems, researchers employ a “sliding cache” that bumps out the oldest tokens to add new tokens. However, the model’s performance often plummets as soon as that first token is evicted, rapidly reducing the quality of newly generated words.</div><div><br></div><div>In this new paper, researchers realized that if they keep the first token in the sliding cache, the model will maintain its performance even when the cache size is exceeded.</div><div><br></div><div>But this didn’t make any sense. The first word in a novel likely has nothing to do with the last word, so why would the first word be so important for the model to generate the newest word?</div><div><br></div><div>In their new paper, the researchers also uncovered the cause of this phenomenon.</div><div><br></div><div><strong>Attention sinks</strong></div><div>Some models use a Softmax operation in their attention mechanism, which assigns a score to each token that represents how much it relates to each other token. The Softmax operation requires all attention scores to sum up to 1. Since most tokens aren’t strongly related, their attention scores are very low. The model dumps any remaining attention score in the first token.</div><div><span class="fs12lh1-5"><b><br></b></span></div><div><span class="fs12lh1-5"><b>The researchers call this first token an “attention sink.”</b></span></div><div>“We need an attention sink, and the model decides to use the first token as the attention sink because it is globally visible — every other token can see it. We found that we must always keep the attention sink in the cache to maintain the model dynamics,” Han says. 
</div><div><br></div><div>In building StreamingLLM, the researchers discovered that having four attention sink tokens at the beginning of the sliding cache leads to optimal performance.</div><div><br></div><div>They also found that the positional encoding of each token must stay the same, even as new tokens are added and others are bumped out. If token 5 is bumped out, token 6 must stay encoded as 6, even though it is now the fifth token in the cache.</div><div><br></div><div>By combining these two ideas, they enabled StreamingLLM to maintain a continuous conversation while outperforming a popular method that uses recomputation.</div><div><br></div><div>For instance, when the cache has 256 tokens, the recomputation method takes 63 milliseconds to decode a new token, while StreamingLLM takes 31 milliseconds. However, if the cache size grows to 4,096 tokens, recomputation requires 1,411 milliseconds for a new token, while StreamingLLM needs just 65 milliseconds.</div><div><br></div><div>“The innovative approach of StreamingLLM, centered around the attention sink mechanism, ensures stable memory usage and performance, even when processing texts up to 4 million tokens in length,” says Yang You, a presidential young professor of computer science at the National University of Singapore, who was not involved with this work. “This capability is not just impressive; it's transformative, enabling StreamingLLM to be applied across a wide array of AI applications. The performance and versatility of StreamingLLM mark it as a highly promising technology, poised to revolutionize how we approach AI-driven generation applications.”</div><div><br></div><div>Tianqi Chen, an assistant professor in the machine learning and computer science departments at Carnegie Mellon University who also was not involved with this research, agreed, saying “Streaming LLM enables the smooth extension of the conversation length of large language models. 
We have been using it to enable the deployment of Mistral models on iPhones with great success.”</div><div><br></div><div>The researchers also explored the use of attention sinks during model training by prepending several placeholder tokens in all training samples.</div><div>They found that training with attention sinks allowed a model to maintain performance with only one attention sink in its cache, rather than the four that are usually required to stabilize a pretrained model’s performance. </div><div><br></div><div>But while StreamingLLM enables a model to conduct a continuous conversation, the model cannot remember words that aren’t stored in the cache. In the future, the researchers plan to target this limitation by investigating methods to retrieve tokens that have been evicted or enable the model to memorize previous conversations.</div><div><br></div><div>StreamingLLM has been incorporated into NVIDIA's large language model optimization library, <span class="cf1">TensorRT-LLM</span>.</div><div><br></div><div>This work is funded, in part, by the MIT-IBM Watson AI Lab, the MIT Science Hub, and the U.S. National Science Foundation.</div></div>]]></description>
			<pubDate>Mon, 26 Feb 2024 10:10:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/chat_thumb.jpg" length="500363" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?a-new-way-to-let-ai-chatbots-converse-all-day-without-crashing</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000005B</guid>
		</item>
		<item>
			<title><![CDATA[Wipro and IBM collaborate to propel enterprise AI]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000058"><div class="imTAJustify">In a bid to accelerate the adoption of AI in the enterprise sector, Wipro has unveiled its latest offering that leverages the capabilities of IBM’s watsonx AI and data platform.</div><div class="imTAJustify"><br></div><div class="imTAJustify">The extended partnership between Wipro and IBM combines the former’s extensive industry expertise with IBM’s leading AI innovations. The collaboration seeks to develop joint solutions that facilitate the implementation of robust, reliable, and enterprise-ready AI solutions.</div><div class="imTAJustify"><br></div><div class="imTAJustify">The Wipro Enterprise AI-Ready Platform harnesses various components of the IBM watsonx suite, including watsonx.ai, watsonx.data, and watsonx.governance, alongside AI assistants. It offers clients a comprehensive suite of tools, large language models (LLMs), streamlined processes, and robust governance mechanisms, laying a solid foundation for the development of future industry-specific analytic solutions.</div><div class="imTAJustify"><br></div><div class="imTAJustify">Jo Debecker, Managing Partner &amp; Global Head of Wipro FullStride Cloud, said: “This expanded partnership with IBM combines our deep contextual cloud, AI, and industry expertise with IBM’s leading AI innovation capabilities.”</div><div class="imTAJustify"><br></div><div class="imTAJustify">A key aspect of this collaboration is the establishment of the IBM TechHub@Wipro, a centralised tech hub aimed at supporting joint client pursuits. 
This initiative will bring together subject matter experts, engineers, assets, and processes to drive and support AI initiatives.</div><div class="imTAJustify"><br></div><div class="imTAJustify">Kate Woolley, General Manager of IBM Ecosystem, commented: “We’re pleased to reach this new milestone in our 20-year partnership to support clients through the combination of Wipro’s and IBM’s joint expertise and technology, including watsonx.”</div><div class="imTAJustify"><br></div><div class="imTAJustify">The Wipro Enterprise AI-Ready Platform offers infrastructure and core software for AI and generative AI workloads, enhancing automation, dynamic resource management, and operational efficiency in the enterprise. Moreover, it caters to specialised industry use cases, such as banking, retail, health, energy, and manufacturing, offering tailored solutions for customer support, marketing, feedback analysis, and more.</div><div class="imTAJustify"><br></div><div class="imTAJustify">Nagendra Bandaru, Managing Partner and President of Wipro Enterprise Futuring, highlighted the flexibility of the platform, stating: “Wipro’s Enterprise AI-Ready Platform will allow clients to easily integrate and standardise multiple data sources augmenting AI- and GenAI-enabled transformation across business functions.”</div><div class="imTAJustify"><br></div><div class="imTAJustify">In addition to facilitating AI governance through the AI lifecycle, the platform prioritises responsible AI practices, ensuring transparency, data protection, and compliance with relevant laws and regulations.</div><div class="imTAJustify"><br></div><div class="imTAJustify">As part of this collaboration, Wipro associates will undergo training in IBM hybrid cloud, AI, and data analytics technologies, further enhancing their capabilities in developing joint solutions.</div></div>]]></description>
			<pubDate>Sun, 25 Feb 2024 23:33:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/ibm_thumb.jpg" length="233158" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?wipro-and-ibm-collaborate-to-propel-enterprise-ai</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000058</guid>
		</item>
		<item>
			<title><![CDATA[Stability AI previews Stable Diffusion 3 text-to-image model]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000057"><div class="imTAJustify">London-based AI lab Stability AI has announced an early preview of its new text-to-image model, Stable Diffusion 3. The advanced generative AI model aims to create high-quality images from text prompts with improved performance across several key areas.</div><div class="imTAJustify"><br></div><div class="imTAJustify">The announcement comes just days after Stability AI’s largest rival, OpenAI, unveiled Sora—a brand new AI model capable of generating nearly-realistic, high-definition videos from simple text prompts.</div><div class="imTAJustify"><br></div><div class="imTAJustify">Sora, which isn’t available to the general public yet either, sparked concerns about its potential to create realistic-looking fake footage. OpenAI said it’s working with experts in misinformation and hateful content to test the tool before making it widely available. </div><div class="imTAJustify"><br></div><div class="imTAJustify">According to Stability AI, Stable Diffusion 3 has significantly better abilities for handling multi-subject image generation compared to previous versions. This allows users to include more detailed prompts with multiple elements and achieve better results. &nbsp;</div><div class="imTAJustify"><br></div><div class="imTAJustify">In addition to improvements with complex prompts, the new model boasts upgraded overall image quality and spelling accuracy. Stability AI claims these upgrades solve some consistency and coherence issues that have impacted past text-to-image models. </div><div class="imTAJustify"><br></div><div class="imTAJustify"><div>While not yet publicly available, Stability AI has opened a waitlist for people interested in early access to Stable Diffusion 3. 
The preview phase will allow Stability AI to gather feedback and continue refining the model before a full release planned for later this year.</div><div><br></div><div class="imTACenter"><img class="image-0" src="http://asianheritagesociety.org/images/stability-ai-stable-diffusion-3-spelling-text-1536x290.png"  title="" alt="" width="870" height="164" /></div><div><br></div><div>Stability AI said it is also working with experts to test Stable Diffusion 3 and ensure it mitigates potential harms, similar to OpenAI’s approach with Sora.</div><div><br></div><div>“We believe in safe, responsible AI practices. This means we have taken and continue to take reasonable steps to prevent the misuse of Stable Diffusion 3 by bad actors. Safety starts when we begin training our model and continues throughout the testing, evaluation, and deployment,” said Stability AI.</div><div><br></div><div>“In preparation for this early preview, we’ve introduced numerous safeguards. By continually collaborating with researchers, experts, and our community, we expect to innovate further with integrity as we approach the model’s public release.”</div><div><br></div><div>Stable Diffusion 3 is being offered in a range of model sizes, from 800 million parameters on the low end to 8 billion on the high end. Stability AI said this spectrum of options aims to balance creative performance and accessibility for users with varying computational resources.</div><div><br></div><div>“Our commitment to ensuring generative AI is open, safe, and universally accessible remains steadfast,” explained Stability AI.</div><div><br></div><div>“With Stable Diffusion 3, we strive to offer adaptable solutions that enable individuals, developers, and enterprises to unleash their creativity, aligning with our mission to activate humanity’s potential.”</div></div></div>]]></description>
			<pubDate>Sat, 24 Feb 2024 23:15:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/st_thumb.jpg" length="316187" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?stability-ai-previews-stable-diffusion-3-text-to-image-model</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000057</guid>
		</item>
		<item>
			<title><![CDATA[Microsoft is quadrupling its AI and cloud investment in Spain]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000059"><div class="imTAJustify">Microsoft has announced plans to significantly boost its investment in AI and cloud infrastructure in Spain, with a commitment to quadruple its spending during 2024-2025 to reach $2.1 billion. This substantial increase marks the largest investment by Microsoft in Spain since its establishment in the country 37 years ago.</div><div class="imTAJustify"><br></div><div class="imTAJustify">The tech giant is set to unveil new data centres in Madrid and has outlined its intention to construct additional centres in Aragon, catering to European companies and public entities. The increased European infrastructure aims to deliver Microsoft’s cloud services with heightened security, privacy, and data sovereignty measures, facilitating access to the company’s full suite of AI solutions for businesses and public administrations in the region.</div><div class="imTAJustify"><br></div><div class="imTAJustify">According to an analysis by IDC, these new Microsoft data centres have the potential to contribute €8.4 billion to the national GDP and help to generate 69,000 jobs from 2026 to 2030.</div><div class="imTAJustify"><br></div><div class="imTAJustify">The commitment to investment aligns with a collaborative agreement forged between the President of the Government, Pedro Sánchez, and Microsoft President Brad Smith. 
Under this collaboration, Microsoft and the Government of Spain will collaborate on various initiatives aimed at advancing responsible AI, enhancing citizen services, and bolstering national cybersecurity and resilience across Spanish companies, public bodies, and critical infrastructures.</div><div class="imTAJustify"><br></div><div class="imTACenter"><img class="image-0" src="http://asianheritagesociety.org/images/br.jpg"  title="" alt="" width="870" height="579" /><br></div><div class="imTAJustify"><div><br></div><div class="imTALeft">This partnership operates within the framework of the National Strategy for Artificial Intelligence and the National Cybersecurity Strategy outlined by the Spanish government. It revolves around four key action points:</div><div class="imTALeft"><br></div><div><ol><li class="imTALeft"><strong><b><span class="fsNaNlh1-5 ff1">Extension of AI in public administration:</span></b></strong><span class="fsNaNlh1-5 ff1"> </span><span class="fsNaNlh1-5 ff1">Efforts will be directed towards modernising administrative processes and equipping officials with AI tools to boost efficiency. 
This includes deploying generative AI solutions and implementing AI training plans for officials.</span></li></ol></div><div><ol start="2"><li class="imTALeft"><strong><b><span class="fsNaNlh1-5 ff1">Promotion of responsible AI:</span></b></strong><span class="fsNaNlh1-5 ff1"> </span><span class="fsNaNlh1-5 ff1">Microsoft will share its responsible AI design standards, along with implementation guides and best practices documentation, with the</span><span class="fsNaNlh1-5 ff1"> </span><span class="fsNaNlh1-5 ff1">Spanish Agency for the Supervision of Artificial Intelligence</span><span class="fsNaNlh1-5 ff1"> </span><span class="fsNaNlh1-5 ff1">(AESIA).</span></li></ol></div><div><ol start="3"><li class="imTALeft"><strong><b><span class="fsNaNlh1-5 ff1">Strengthening national cybersecurity:</span></b></strong><span class="fsNaNlh1-5 ff1"> </span><span class="fsNaNlh1-5 ff1">Collaboration with the</span><span class="fsNaNlh1-5 ff1"> </span><span class="fsNaNlh1-5 ff1">National Cryptological Center</span><span class="fsNaNlh1-5 ff1"> </span><span class="fsNaNlh1-5 ff1">(CNI) aims to enhance early warning mechanisms and response to cybersecurity incidents in public administrations.</span></li></ol></div><div><ol start="4"><li class="imTALeft"><strong><b><span class="fsNaNlh1-5 ff1">Improving cyber-resilience of companies:</span></b></strong><span class="fsNaNlh1-5 ff1"> </span><span class="fsNaNlh1-5 ff1">Microsoft will collaborate with the</span><span class="fsNaNlh1-5 ff1"> </span><span class="fsNaNlh1-5 ff1">National Institute of Cybersecurity</span><span class="fsNaNlh1-5 ff1"> </span><span class="fsNaNlh1-5 ff1">(INCIBE) to enhance the cybersecurity posture of Spanish companies, particularly SMEs, by providing access to threat intelligence and conducting joint outreach initiatives.</span></li></ol></div><div class="imTALeft"><br></div><div class="imTALeft">Microsoft’s increased investment underscores its commitment to advancing technological innovation in Spain 
while fostering a secure and responsible digital ecosystem.</div></div></div>]]></description>
			<pubDate>Sat, 24 Feb 2024 00:18:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/spain_thumb.jpg" length="400412" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?microsoft-is-quadrupling-its-ai-and-cloud-investment-in-spain</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000059</guid>
		</item>
		<item>
			<title><![CDATA[Reddit is reportedly selling data for AI training]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000056"><div class="imTAJustify">Reddit has negotiated a content licensing deal to allow its data to be used for training AI models, according to a Bloomberg report.</div><div class="imTAJustify"><br></div><div class="imTAJustify">Just ahead of a potential $5 billion initial public offering (IPO) debut in March, Reddit has reportedly signed a $60 million deal with an undisclosed major AI company. This move could be seen as a last-minute effort to showcase potential revenue streams in the rapidly growing AI industry to prospective investors.</div><div class="imTAJustify"><br></div><div class="imTAJustify">Although Reddit has yet to confirm the deal, the decision could have significant implications. If true, it would mean that Reddit’s vast trove of user-generated content – including posts from popular subreddits, comments from both prominent and obscure users, and discussions on a wide range of topics – could be used to train and enhance existing large language models (LLMs) or provide the foundation for the development of new generative AI systems.</div><div class="imTAJustify"><br></div><div class="imTAJustify">However, this decision by Reddit may not sit well with its user base, as the company has faced increasing opposition from its community regarding its recent business decisions.</div><div class="imTAJustify"><br></div><div class="imTAJustify">Last year, when Reddit announced plans to start charging for access to its application programming interfaces (APIs), thousands of Reddit forums temporarily shut down in protest. Days later, a group of Reddit hackers threatened to release previously stolen site data unless the company reversed the API plan or paid a ransom of $4.5 million.</div><div class="imTAJustify"><br></div><div class="imTAJustify">Reddit has recently made other controversial decisions, such as removing years of private chat logs and messages from users’ accounts. 
The platform also implemented new automatic moderation features and removed the option for users to turn off personalised advertising, fuelling additional discontent among its users.</div><div class="imTAJustify"><br></div><div class="imTAJustify">This latest reported deal to sell Reddit’s data for AI training could generate even more backlash from users, as the debate over the ethics of using public data, art, and other human-created content to train AI systems continues to intensify across various industries and platforms.</div></div>]]></description>
			<pubDate>Fri, 23 Feb 2024 23:04:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/re_thumb.jpg" length="279329" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?reddit-is-reportedly-selling-data-for-ai-training</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000056</guid>
		</item>
		<item>
			<title><![CDATA[How Human-Machine Interfaces Will Change The Face Of War]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000052"><div><span class="fs12lh1-5">For years, researchers have explored the potential of brain-computer interfaces (BCIs) – systems that connect the human brain to external technology – to restore movement in people with paralyzed limbs, using arrays of electrodes implanted directly onto the surface of the brain.</span></div><div><br></div><div><span class="fs12lh1-5">In the future, however, research supported by the US government may make it possible to use BCIs without any surgery – and one of the first applications may be giving soldiers an edge on the battlefield.</span></div><div><br></div><div><span class="fs12lh1-5">DARPA, the U.S. Department of Defense’s research and development agency, launched its Next-Generation Nonsurgical Neurotechnology (N3) program in 2018. The program seeks to create non-invasive, or minimally invasive, brain-computer interfaces that could allow troops to communicate with systems such as air vehicles or cyber defense systems more quickly than with voice commands or keyboards. Soldiers could also potentially pilot drones or tanks with thought alone.</span></div><div><b><span class="fs12lh1-5">Six funded projects</span></b></div><div><br></div><div><span class="fs12lh1-5">“DARPA is preparing for a future in which a combination of unmanned systems, artificial intelligence and cyber operations could cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, manager of the N3 program, last year when funding for six projects was announced. “By creating a more accessible brain-machine interface that doesn’t require surgery to use, DARPA could deliver tools that allow mission commanders to remain meaningfully involved in dynamic operations that unfold at rapid speed. 
”</span></div><div><br></div><div><span class="fs12lh1-5">The research agency has awarded funding to six groups under the N3 program, each studying a different method for allowing humans and machines to communicate at the speed of thought without surgery. The groups are looking at a whole range of approaches: ultrasound, magnetic fields, light, electric fields, and optical coherence tomography (OCT) are among the technologies being researched.</span></div><div><br></div><div><span class="fs12lh1-5">Ohio-based research organization Battelle is one of the six groups to receive DARPA funding, for a minimally invasive system that should eventually be able to collect information from soldiers’ brains and transmit information to them. “Imagine this: a soldier puts on a helmet and uses only his thoughts to control multiple unmanned vehicles or a demining robot,” as the company described it last year.</span></div><div><span class="fs12lh1-5">The objective of the project is “to improve the capacity of our soldiers and our fighters – to learn faster, to do things better,” Patrick Ganzer, principal researcher at Battelle, told ZDNet. The Battelle system is based on nanoparticles and uses their electromagnetic properties to collect data and communicate it to the wearer.</span></div><div><br></div><div><b><span class="fs12lh1-5">Challenges to overcome</span></b></div><div><span class="fs12lh1-5">According to DARPA, the main challenges in developing non-invasive, or minimally invasive, BCIs are overcoming the signal-to-noise ratio and “the complex physics of scattering and weakening of signals as they pass through” the skin, the skull, and the brain tissue. 
Battelle believes that using electromagnetic waves, rather than light or ultrasound, should overcome the problem.</span></div><div><br></div><div><span class="fs12lh1-5">Once N3 participants figure out the physics of their BCIs, says DARPA, they can move on to encoding and decoding neural signals, building a single sensing and stimulation device, and testing the safety and efficacy of the systems in animals, before experimenting with human volunteers. Although it is not yet clear how the brain might react to the introduction of thousands of nanoparticles, the use of other nanoparticles in medicine may provide some clues. Nanoparticles are already used in hospitals as contrast media, substances that are injected into or swallowed by patients to make certain parts of the body appear more clearly on a CT or MRI scan.</span></div><div><br></div><div><span class="fs12lh1-5">DARPA estimates that the BCIs should be usable for two hours at a time, but it is conceivable that real-world systems will have to stay in place much longer to cope with long missions, possibly being re-injected or re-magnetized in order to keep them in place for longer-term use.</span></div><div><br></div><div><b><span class="fs12lh1-5">Medical applications</span></b></div><div><span class="fs12lh1-5">Creating a system capable of operating in the harsh environment of the human body is one thing, but creating a BCI capable of handling the complexity of human thought is another. Developing the interface for a minimally invasive system is a real challenge: make it too simple and it is not useful, but make it too complex and it becomes difficult for the user to handle.</span></div><div><br></div><div><span class="fs12lh1-5">“There is a trade-off between the complexity of feedback and how intuitively you can feel it. Imagine you have a very simple feedback system. Let’s say there are four places, and each of them means something different that you learn over time. 
If I increase that number to eight or twenty, or something more complex, I start to put a heavy load on the user. There’s an operational sweet spot where it’s easy to use, you don’t have to learn a lot, and you don’t have to think about it – it feels natural. Like all good technology, it simply works,” explains Patrick Ganzer.</span></div><div><br></div><div><span class="fs12lh1-5">Much of the groundbreaking research on brain-computer interfaces is focused on medical applications. By bypassing broken connections in the pathways that lead from the brain to muscles and skin, BCIs could help overcome the paralysis and loss of the sense of touch that result from strokes and spinal cord injuries.</span></div><div><br></div><div><span class="fs12lh1-5">This work typically involves invasive BCIs – systems that require surgery to implant electrode arrays into the brain – but the new wave of non-invasive or minimally invasive systems may offer a non-surgical alternative in the future. Such an alternative could also make this type of interface usable in a far wider range of cases. According to Patrick Ganzer, besides spinal injuries and strokes, minimally invasive systems could potentially be used for epilepsy and depression.</span></div></div>]]></description>
			<pubDate>Wed, 21 Feb 2024 13:22:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/brain_computer_interface_thumb.jpg" length="213248" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?how-human-machine-interfaces-will-change-the-face-of-war</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000052</guid>
		</item>
		<item>
			<title><![CDATA[AI Projects Adopted In Business Mobilize Up To $20 Million Per Year]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000051"><div><span class="fs12lh1-5">According to a Deloitte survey, 53% of companies adopting artificial intelligence spent more than $20 million in the last year on technology and talent.</span></div><div><br></div><div><span class="fs12lh1-5">The State of AI in Business survey, based on 2,737 IT and industry executives, highlights how quickly AI applications are entering production. Of those surveyed by Deloitte, 26% are “seasoned adopters”, 47% “skilled adopters” and 27% “newbies”. Respondents were ranked based on AI adoption and the systems they have put into production.</span></div><div><br></div><div><span class="fs12lh1-5">According to the research firm, 68% of seasoned adopters spent more than $20 million in the past year on AI. In addition, 81% of them confirmed a return on investment in less than two years.</span></div><div><br></div><div><b><span class="fs12lh1-5">Improve decision making</span></b></div><div><span class="fs12lh1-5">Regarding the technological range of AI, 67% of respondents today use machine learning and 97% plan to do so, while 54% use deep learning and 58% use natural language processing, says Deloitte.</span></div><div><br></div><div><span class="fs12lh1-5">These AI enthusiasts see more efficient processes as the primary rationale for deployments, with improved decision-making also a key goal. 
AI adopters also typically buy more technology than they build, but only 47% of those surveyed said they use suppliers, Deloitte suggests.</span></div><div><br></div><div><strong><b><span class="fs12lh1-5">Other key results include:</span></b></strong></div><div><ul><li><span class="fs12lh1-5">45% said they had a high level of proficiency in integrating AI technology into their existing IT environment.</span></li><li><span class="fs12lh1-5">93% use AI in the cloud, 78% use open-source AI.</span></li><li><span class="fs12lh1-5">61% said they believe AI will dramatically transform their industry.</span></li><li><span class="fs12lh1-5">62% said they were very concerned about AI-related cybersecurity vulnerabilities, followed by failures impacting business operations and the use of personal data without consent. Responsibility and regulatory developments are also major concerns.</span></li><li><span class="fs12lh1-5">95% of those surveyed are concerned about the ethical risks associated with deploying AI.</span></li><li><span class="fs12lh1-5">62% of respondents believe that AI technologies should be heavily regulated.</span></li><li><span class="fs12lh1-5">The main beneficiaries of the success of artificial intelligence are the IT departments themselves.</span></li></ul></div></div>]]></description>
			<pubDate>Mon, 19 Feb 2024 12:55:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/MIT-SelfSupervisedLearning-01_0_thumb_tqk65369.jpg" length="53486" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?ai-projects-adopted-in-business-mobilize-up-to---20-million-per-year</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000051</guid>
		</item>
		<item>
			<title><![CDATA[OpenAI's new text-to-video tool, Sora, has one artificial intelligence expert "terrified"]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000004E"><div>Another groundbreaking generative artificial intelligence tool, unveiled Thursday by the company behind ChatGPT, is expected to accelerate the proliferation of deepfake videos and have implications for virtually every industry. </div><div>Sora, an AI application that takes written prompts and turns them into original videos, is already so powerful that one AI expert says it has him "terrified." </div><div><br></div><div class="imTACenter"><img class="image-0" src="http://asianheritagesociety.org/images/oai2.jpg"  title="" alt="" width="840" height="559" /><br></div><div><br></div><div>"Generative AI tools are evolving so rapidly, and we have social networks — which leads to an Achilles heel in our democracy and it couldn't have happened at a worse time," Oren Etzioni, founder of TruMedia.org, told CBS MoneyWatch. The nonprofit organization, dedicated to fighting AI-based disinformation in political campaigns, focuses on identifying manipulated media, including <span class="cf1">so-called <a href="https://www.youtube.com/watch?v=3wVpVH0Wa6E" onclick="return x5engine.imShowBox({ media:[{type: 'youtube', url: 'https://www.youtube.com/watch?v=3wVpVH0Wa6E', width: 1920, height: 1080, text: '', 'showVideoControls': true }]}, 0, this);" class="imCssLink">deepfake videos</a></span>. </div><div><br></div><div>"As we're trying to sort this out we're coming up against one of the most consequential elections in history," he added, referring to the 2024 presidential election. </div><div><br></div><div>Sora maker OpenAI <span class="cf1">shared</span> a teaser of its text-to-video model on X, explaining that it can instantaneously create sophisticated, 60-second-long videos "featuring highly detailed scenes, complex camera motion and multiple characters with vibrant emotions."</div><div><br></div><div>The tool is not yet publicly available. 
For the time being, OpenAI has restricted its use to "red teamers" and some visual artists, designers and filmmakers to test the product and deliver feedback to the company before it's released more widely. </div><div>Safety experts will evaluate the tool to understand how it could potentially create misinformation and hateful content, OpenAI said.</div><div><br></div><div><span class="fs12lh1-5"><b>Landing soon</b></span></div><div>Advances in technology have seemingly outpaced checks and balances on these kinds of tools, according to Etzioni, who believes in using AI for good and with guardrails in place. </div><div><br></div><div>"We're trying to build this airplane as we're flying it, and it's going to land in November if not before — and we don't have the Federal Aviation Administration, we don't have the history and we don't have the tools in place to do this," he said. </div><div>All that's stopping the tool from becoming widely available is the company itself, Etzioni said, adding that he's confident Sora, or a similar technology from an OpenAI competitor, will be released to the public in the coming months. </div><div>Celebrities are not the only targets; any ordinary citizen can be affected by a deepfake scam. </div><div><br></div><div>"And [Sora] will make it even easier for malicious actors to generate high-quality video deepfakes, and give them greater flexibility to create videos that could be used for offensive purposes," Dr. Andrew Newell, chief scientific officer for identity verification firm iProov, told CBS MoneyWatch. </div><div><br></div><div>This puts the onus on organizations like banks to develop their own AI-based tools to protect consumers against potential threats. </div><div><br></div><div>Banks that rely on video authentication security measures are most exposed, he added. 
</div><div><br></div><div><span class="fs12lh1-5"><b>Threat to actors, creators</b></span></div><div>The tool's capabilities are most closely related to the skills of workers in content creation, including filmmaking and media. </div><div><br></div><div>"Voice actors or people who make short videos for video games, educational purposes or ads will be the most immediately affected," he said. </div><div><br></div><div>"For professions like marketing or creative, multimodal models could be a game changer and could create significant cost savings for film and television makers, and may contribute to the proliferation of AI-generated content rather than using actors," Reece Hayden, senior analyst at ABI Research, a tech intelligence company, told CBS MoneyWatch.</div><div>Given that it makes it easier for anyone — even those without artistic ability — to create visual content, Sora could let users develop choose-your-own-adventure-style media. </div><div><br></div><div>Even a major player like "Netflix could enable end users to develop their own content based on prompts," Hayden said. </div></div>]]></description>
			<pubDate>Sat, 17 Feb 2024 01:06:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/oai_thumb.jpg" length="129037" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?openai-s-new-text-to-video-tool,-sora,-has-one-artificial-intelligence-expert--terrified-</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000004E</guid>
		</item>
		<item>
			<title><![CDATA[OpenAI’s Sora Turns AI Prompts Into Photorealistic Videos]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000004D"><div><span class="fs12lh1-5"><span class="cf1">We already know that</span><span class="cf1"> </span><span class="cf1">OpenAI’s chatbots</span><span class="cf1"> </span><span class="cf1">can</span><span class="cf1"> </span><span class="cf1">pass the bar exam</span><span class="cf1"> </span><span class="cf1">without going to law school. Now, just in time for the Oscars, a new OpenAI app called Sora hopes to master cinema without going to film school. For now a research product, Sora is going out to a few select creators and a number of security experts who will red-team it for safety vulnerabilities. OpenAI plans to make it available to all wannabe auteurs at some unspecified date, but it decided to preview it in advance.</span></span></div><div><span class="fs12lh1-5 cf1"><br></span></div><div><div><span class="fs12lh1-5">Other companies, from giants like <span class="cf1">Google</span> to startups like <span class="cf1">Runway</span>, have already revealed <span class="cf1">text-to-video AI projects</span>. But OpenAI says that Sora is distinguished by its striking photorealism—something I haven’t seen in its competitors—and its ability to produce longer clips than the brief snippets other models typically do, up to one minute. The researchers I spoke to won’t say how long it takes to render all that video, but when pressed, they described it as more in the “going out for a burrito” ballpark than “taking a few days off.” If the hand-picked examples I saw are to be believed, the effort is worth it.</span></div><div><span class="fs12lh1-5">OpenAI didn’t let me enter my own prompts, but it shared four instances of Sora’s power. (None approached the purported one-minute limit; the longest was 17 seconds.) The first came from a detailed prompt that sounded like an obsessive screenwriter’s setup: “Beautiful, snowy Tokyo city is bustling. 
The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.”</span></div></div><div><span class="fs12lh1-5"><br></span></div><div class="imTACenter"><a href="https://media.wired.com/clips/65cd609a1b47a15ce1b4001e/360p/pass/tokyo.mp4" onclick="return x5engine.imShowBox({ media:[{type: 'video', url: 'https://media.wired.com/clips/65cd609a1b47a15ce1b4001e/360p/pass/tokyo.mp4', width: 1920, height: 1080, description: ''}]}, 0, this);" class="imCssLink inline-block"><img class="image-0" src="http://asianheritagesociety.org/images/oa.jpg"  title="" alt="" width="870" height="489" /><br></a></div><div><div class="imTACenter"><a href="https://media.wired.com/clips/65cd609a1b47a15ce1b4001e/360p/pass/tokyo.mp4" onclick="return x5engine.imShowBox({ media:[{type: 'video', url: 'https://media.wired.com/clips/65cd609a1b47a15ce1b4001e/360p/pass/tokyo.mp4', width: 1920, height: 1080, description: ''}]}, 0, this);" class="imCssLink"><span class="fs9lh1-5 cf1 ff1">AI-generated video made with OpenAI's Sora.</span><span class="fs9lh1-5 cf1 ff1"> </span><span class="fs8lh1-5 cf2 ff2">COURTESY OF OPENAI</span></a></div></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">The result is a convincing view of what is unmistakably Tokyo, in that magic moment when snowflakes and cherry blossoms coexist. The virtual camera, as if affixed to a drone, follows a couple as they slowly stroll through a streetscape. One of the passersby is wearing a mask. Cars rumble by on a riverside roadway to their left, and to the right shoppers flit in and out of a row of tiny shops.</span><br></div><div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">It’s not perfect. 
Only when you watch the clip a few times do you realize that the main characters—a couple strolling down the snow-covered sidewalk—would have faced a dilemma had the virtual camera kept running. The sidewalk they occupy seems to dead-end; they would have had to step over a small guardrail to a weird parallel walkway on their right. Despite this mild glitch, the Tokyo example is a mind-blowing exercise in world-building. Down the road, production designers will debate whether it’s a powerful collaborator or a job killer. Also, the people in this video—who are entirely generated by a digital neural network—aren’t shown in close-up, and they don’t do any emoting. But the Sora team says that in other instances they’ve had fake actors showing real emotions.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">The other clips are also impressive, notably one asking for “an animated scene of a short fluffy monster kneeling beside a red candle,” along with some detailed stage directions (“wide eyes and open mouth”) and a description of the desired vibe of the clip. Sora produces a Pixar-esque creature that seems to have DNA from a Furby, a Gremlin, and Sully in Monsters, Inc. I remember when that latter film came out, Pixar made a huge deal of how difficult it was to create the <span class="cf1">ultra-complex texture of a monster’s fur</span> as the creature moved around. It took all of Pixar’s wizards months to get it right. OpenAI’s new text-to-video machine … just did it.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">“It learns about 3D geometry and consistency,” says Tim Brooks, a research scientist on the project, of that accomplishment. 
“We didn’t bake that in—it just entirely emerged from seeing a lot of data.”</span></div></div><div><span class="fs12lh1-5"><br></span></div><div class="imTACenter"><a href="https://media.wired.com/clips/65cd6097640589f91cb00713/360p/pass/monster.mp4" onclick="return x5engine.imShowBox({ media:[{type: 'video', url: 'https://media.wired.com/clips/65cd6097640589f91cb00713/360p/pass/monster.mp4', width: 1920, height: 1080, description: ''}]}, 0, this);" class="imCssLink inline-block"><img class="image-1" src="http://asianheritagesociety.org/images/oa2.jpg"  title="" alt="" width="870" height="493" /><br></a></div><div><div><div class="imTACenter"><a href="https://media.wired.com/clips/65cd6097640589f91cb00713/360p/pass/monster.mp4" onclick="return x5engine.imShowBox({ media:[{type: 'video', url: 'https://media.wired.com/clips/65cd6097640589f91cb00713/360p/pass/monster.mp4', width: 1920, height: 1080, description: ''}]}, 0, this);" class="imCssLink"><span class="fs9lh1-5 cf1 ff1">AI-generated video made with the prompt, “animated scene features a close-up of a short fluffy monster kneeling beside a melting red candle. the art style is 3d and realistic, with a focus on lighting and texture. the mood of the painting is one of wonder and curiosity, as the monster gazes at the flame with wide eyes and open mouth. its pose and expression convey a sense of innocence and playfulness, as if it is exploring the world around it for the first time. the use of warm colors and dramatic lighting further enhances the cozy atmosphere of the image.”</span><span class="fs9lh1-5 cf1 ff1"> </span><span class="fs8lh1-5 cf2 ff2">COURTESY OF OPENAI</span></a></div><span class="fs12lh1-5"><br>While the scenes are certainly impressive, the most startling of Sora’s capabilities are those that it has not been trained for. 
Powered by a version of the <span class="cf1">diffusion model</span> used by OpenAI’s Dall-E 3 image generator as well as the transformer-based engine of GPT-4, Sora does not merely churn out videos that fulfill the demands of the prompts, but does so in a way that shows an emergent grasp of cinematic grammar.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">That translates into a flair for storytelling. Another video was created from a prompt for “a gorgeously rendered papercraft world of a coral reef, rife with colorful fish and sea creatures.” Bill Peebles, another researcher on the project, notes that Sora created a narrative thrust with its camera angles and timing. “There's actually multiple shot changes—these are not stitched together, but generated by the model in one go,” he says. “We didn’t tell it to do that, it just automatically did it.”</span></div></div><div><span class="fs12lh1-5"><br></span></div><div class="imTACenter"><a href="https://media.wired.com/clips/65cd6095b249de2eed894c4d/360p/pass/origami.mp4" onclick="return x5engine.imShowBox({ media:[{type: 'video', url: 'https://media.wired.com/clips/65cd6095b249de2eed894c4d/360p/pass/origami.mp4', width: 1920, height: 1080, description: ''}]}, 0, this);" class="imCssLink inline-block"><img class="image-2" src="http://asianheritagesociety.org/images/oa3.jpg"  title="" alt="" width="870" height="489" /><br></a></div><div><div><div class="imTACenter"><a href="https://media.wired.com/clips/65cd6095b249de2eed894c4d/360p/pass/origami.mp4" onclick="return x5engine.imShowBox({ media:[{type: 'video', url: 'https://media.wired.com/clips/65cd6095b249de2eed894c4d/360p/pass/origami.mp4', width: 1920, height: 1080, description: ''}]}, 0, this);" class="imCssLink"><span class="fs9lh1-5 cf1 ff1">AI-generated video made with the prompt “a gorgeously rendered papercraft world of a coral reef, rife with colorful fish and sea creatures.”</span><span class="fs8lh1-5 cf2 
ff2">COURTESY OF OPENAI</span></a></div></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">In another example I didn’t view, Sora was prompted to give a tour of a zoo. “It started off with the name of the zoo on a big sign, gradually panned down, and then had a number of shot changes to show the different animals that live at the zoo,” says Peebles. “It did it in a nice and cinematic way that it hadn't been explicitly instructed to do.”</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">One feature in Sora that the OpenAI team didn’t show, and may not release for quite a while, is the ability to generate videos from a single image or a sequence of frames. “This is going to be another really cool way to improve storytelling capabilities,” says Brooks. “You can draw exactly what you have on your mind and then animate it to life.” OpenAI is aware that this feature also has the potential to produce deepfakes and misinformation. “We’re going to be very careful about all the safety implications for this,” Peebles adds.</span></div></div><div><span class="fs12lh1-5"><br></span></div><div><div><span class="fs12lh1-5">Expect Sora to have the same restrictions on content as Dall-E 3: no violence, no porn, no appropriating real people or the style of named artists. Also as with Dall-E 3, OpenAI will provide a way for viewers to identify the output as AI-created. Even so, OpenAI says that safety and veracity are an ongoing problem that's bigger than one company. 
“The solution to misinformation will involve some level of mitigations on our part, but it will also need understanding from society and for social media networks to adapt as well,” says Aditya Ramesh, lead researcher and head of the Dall-E team.</span></div><div><span class="fs12lh1-5"><br></span></div><div class="imTACenter"><a href="https://media.wired.com/clips/65cd609bb4d2e54e7c66f6b7/360p/pass/mammoth.mp4" onclick="return x5engine.imShowBox({ media:[{type: 'video', url: 'https://media.wired.com/clips/65cd609bb4d2e54e7c66f6b7/360p/pass/mammoth.mp4', width: 1920, height: 1080, description: ''}]}, 0, this);" class="imCssLink inline-block"><img class="image-3" src="http://asianheritagesociety.org/images/oa4.jpg"  title="" alt="" width="870" height="489" /><br></a></div><div><div><div class="imTACenter"><a href="https://media.wired.com/clips/65cd609bb4d2e54e7c66f6b7/360p/pass/mammoth.mp4" onclick="return x5engine.imShowBox({ media:[{type: 'video', url: 'https://media.wired.com/clips/65cd609bb4d2e54e7c66f6b7/360p/pass/mammoth.mp4', width: 1920, height: 1080, description: ''}]}, 0, this);" class="imCssLink"><span class="fs9lh1-5 cf1 ff1">AI-generated video made with the prompt “several giant wooly mammoths approach treading through a snowy meadow, their long wooly fur lightly blows in the wind as they walk, snow covered trees and dramatic snow capped mountains in the distance, mid afternoon light with wispy clouds and a sun high in the distance creates a warm glow, the low camera view is stunning capturing the large furry mammal with beautiful photography, depth of field.”</span><span class="fs8lh1-5 cf2 ff2">COURTESY OF OPENAI</span></a></div></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">Another potential issue is whether the content of the video Sora produces will infringe on the copyrighted work of others. “The training data is from content we’ve licensed and also publicly available content,” says Peebles. 
Of course, the nub of a <span class="cf1">number of lawsuits against OpenAI</span> hinges on the question of whether “publicly available” copyrighted content is fair game for AI training.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">It will be a very long time, if ever, before text-to-video threatens actual filmmaking. No, you can’t make coherent movies by stitching together 120 of the minute-long Sora clips, since the model won’t respond to prompts in the exact same way—continuity isn’t possible. But the time limit is no barrier for Sora and programs like it to transform TikTok, Reels, and other social platforms. “In order to make a professional movie, you need so much expensive equipment,” says Peebles. “This model is going to empower the average person making videos on social media to make very high-quality content.”</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">For now, OpenAI is faced with the huge task of making sure that Sora isn’t a misinformation train wreck. But after that, the long countdown begins until the next Christopher Nolan or Celine Song gets a statuette for wizardry in prompting an AI model. The envelope, please!</span></div></div></div></div>]]></description>
			<pubDate>Thu, 15 Feb 2024 22:49:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/oa2_thumb.jpg" length="402969" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?openai-s-sora-turns-ai-prompts-into-photorealistic-videos</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000004D</guid>
		</item>
		<item>
			<title><![CDATA[Embracing The Future of Marketing With AI]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000004A"><div><span class="fs12lh1-5"><b>Embracing The Future of Marketing With AI</b></span></div><div><span class="fs12lh1-5"><br></span></div>

<div><span class="fs12lh1-5">Last year was one of dramatic transformation across all
industries, but as we enter 2024, global businesses are
still assessing how they can best utilize the technology that's taken the world
by storm—AI. Marketing is in the same boat as the rest. In a time when
marketing teams carefully allocate time and effort to achieve the most bang for
their buck, mastering the dynamic interplay of AI and marketing isn't just a
strategic advantage but a business-critical necessity.</span></div>

<div><span class="fs12lh1-5"> </span></div>

<div><span class="fs12lh1-5">Advancements in AI and its subsets, machine learning (ML)
and generative AI, have prompted the industry to reassess where this technology
can be implemented to drive efficiency, save valuable time and improve return
on investment (ROI). At Adjust, we've seen firsthand the impact of ML on
predictive analytics and campaign measurement and, across the industry,
generative AI has been used for the development of written content and creative
visuals. The common goal? To reduce time spent on the basics and increase focus
on strategic initiatives.</span></div>

<div><span class="fs12lh1-5"> </span></div>

<div><span class="fs12lh1-5"><b>Less Is More</b></span></div>

<div><span class="fs12lh1-5">Driven by ongoing economic pressures, businesses are
continually searching for ways to save time and improve efficiency. In
conversations with our customers, we often find that marketers need more time
to focus on strategy, creative ideas and tactics to best target their
audiences.</span></div>

<div><span class="fs12lh1-5"> </span></div>

<div><span class="fs12lh1-5">To do this, they need to spend less time on day-to-day
tasks like campaign management and data collection. As a result, most marketing
teams have embraced AI in some form to speed up these processes, either
directly through easily accessible tools like ChatGPT or indirectly through
partners or by utilizing integrations to existing software.</span></div>

<div><span class="fs12lh1-5"> </span></div>

<div><span class="fs12lh1-5">Along with increasing efficiency by automating mundane
activities, AI technology has enabled marketing teams and their tech partners
to optimize campaign results. Measurement and analytics tools have always
played a crucial role in marketing, delivering valuable insights into
performance, user behavior and ROI. These tools establish the foundation for
data-driven decision-making, enabling marketers to refine strategies and
pinpoint the optimal audience for campaign success. What's become apparent to
our team and customers over the last year is the remarkable enhancements that
AI—specifically, ML—can bring to these tools.</span></div>

<div><span class="fs12lh1-5"> </span></div>

<div><span class="fs12lh1-5">Traditional methods of campaign measurement have
experienced a resurgence with the help of ML—namely, incrementality and media
mix modeling, or MMM. Incrementality allows marketers to understand the true
value of their marketing activities by showing the difference between the
outcomes of changed and unchanged marketing activities. MMM, on the other hand,
measures a wide variety of marketing activities with aggregated data to gain
insight into their ROI impact.</span></div>
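
As an illustrative sketch only (the <code>incremental_lift</code> helper below is hypothetical, not part of Adjust's or any vendor's API), incrementality boils down to comparing the conversion rate of an exposed group against a comparable holdout group; conversions above the holdout baseline are the ones the campaign actually caused:

```python
def incremental_lift(treated_conv: int, treated_n: int,
                     control_conv: int, control_n: int) -> float:
    """Estimate the share of treated-group conversions attributable
    to the campaign (i.e., above the organic holdout baseline)."""
    treated_rate = treated_conv / treated_n   # exposed to the campaign
    control_rate = control_conv / control_n   # held out of the campaign
    # Conversions beyond the organic baseline are incremental.
    return (treated_rate - control_rate) / treated_rate

# Suppose 500 of 10,000 exposed users converted (5%) versus
# 300 of 10,000 held-out users (3%): about 40% of the exposed
# group's conversions were incremental to the campaign.
lift = incremental_lift(500, 10_000, 300, 10_000)
print(f"{lift:.0%}")
```

MMM works at a coarser grain, regressing aggregated outcomes on spend across channels, but the underlying question is the same: what changed because the marketing ran?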

<div><span class="fs12lh1-5"> </span></div>

<div><span class="fs12lh1-5">As well as improving these traditional methods, the latest
ML technology now extracts insights from aggregated campaign data in
significantly less time to predict usage patterns and offer recommendations for
optimizing campaigns. Previously, measurement focused on bringing together data
from a few sources and comparing results to inform future decision-making. Now,
with AI and ML, marketers can predict a user's lifetime value on days three,
seven, 14 and 30 of a campaign. This
predictive analysis, combined with the modernization of incrementality and MMM,
means marketers have a stronger understanding of how to spend their budgets to
achieve a higher return on ad spend.</span></div>

<div><span class="fs12lh1-5"> </span></div>

<div><span class="fs12lh1-5">The benefits of this are immediately apparent. For
example, when analyzing the effectiveness of out-of-home (OOH) campaigns—which,
simply put, is advertising that can be found outside of a consumer's
home—marketers used to wait six months for measurement. Now, ML-enabled
measurement delivers results within a matter of weeks, swiftly enabling
marketers to see the impact of their OOH advertisements on app usage, as well
as predicted success, and act accordingly.</span></div>

<div><span class="fs12lh1-5"> </span></div>

<div><span class="fs12lh1-5"><b>Looking Forward</b></span></div>

<div><span class="fs12lh1-5">As we expect to see increased investment in AI, embracing
this technology in marketing will be vital for staying competitive in the
evolving landscape. Right now, AI allows marketers to access data that's more
refined and helps them make smarter, more informed decisions as a result.</span></div>

<div><span class="fs12lh1-5"> </span></div>

<div><span class="fs12lh1-5">However, in 2024 and beyond, we
believe the next step is to answer one critical question: How can this new tech
actually take all of that data and then help users themselves reach an informed
decision and take actionable next steps? Through the ongoing analysis of data,
advanced learning models can make strong recommendations to marketers. In less
time and with more accuracy, this can help growth marketers figure out how to
optimize their spending to reach their audience and drive results.</span></div>

<div><span class="fs12lh1-5"> </span></div>

<div><span class="fs12lh1-5"><b>Getting Ahead</b></span></div>

<div><span class="fs12lh1-5">What's clear is that there's no going back to traditional
marketing and campaign measurement approaches. While 2023
was a year of transformation, 2024 and the years to follow
hold some very exciting potential. So, how do we make the most of this
opportunity?</span></div>

<div><span class="fs12lh1-5"> </span></div>

<div><span class="fs12lh1-5">Understanding and leveraging AI in a responsible and
considered way to achieve the best possible results is something we can only
accomplish through effective collaboration and communication. This applies to many
aspects of AI—development, regulation, education and much more. But from a
marketing perspective, this means being in constant communication with your
tech partners.</span></div>

<div><span class="fs12lh1-5"> </span></div>

<div><span class="fs12lh1-5">In recent years, particularly in the mobile marketing
industry, we've seen a lot of collaboration. Through conversations with our
customers and partners, we're always learning from each other,
whether that's about trends, demands or solutions.</span></div>

<div><span class="fs12lh1-5"> </span></div>

<div><span class="fs12lh1-5">Moreover, AI is a rapidly evolving field. Marketers must
stay informed about the latest trends, technologies and best practices to
harness new technology like AI to innovate and remain competitive. By staying
informed and communicating with well-versed partners, marketers should feel
equipped to take the plunge and embrace new strategies that leverage AI. No
one's getting it right on the first try, but successful teams are the ones that
keep experimenting, testing and adapting their AI campaigns.</span></div></div>]]></description>
			<pubDate>Thu, 15 Feb 2024 00:59:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/eye_thumb.jpg" length="133927" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?embracing-the-future-of-marketing-with-ai</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000004A</guid>
		</item>
		<item>
			<title><![CDATA[Not So Fast: Study Finds AI Job Displacement Likely Substantial, Yet Gradual]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000048"><div>In popular culture, the fear of AI taking over jobs often manifests in dystopian narratives in which machines replace human workers, leading to societal unrest and economic collapse. Films like "The Terminator" and "Blade Runner" depict a future where AI-driven automation results in widespread unemployment and social upheaval, reflecting deep-seated anxieties about technological advancement and its impact on the workforce. These portrayals resonate with real-world concerns about job displacement and the growing role of AI in various industries.</div><div class="imTACenter"><img class="image-0" src="http://asianheritagesociety.org/images/ter.jpg"  title="" alt="" width="880" height="631" /><br></div><div>A new study set out to address the significance – and speed – with which AI might automate tasks currently performed by workers. Titled “Which Tasks are Cost-Effective to Automate With Computer Vision,” the study by the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory, MIT Sloan School of Management, The Productivity Institute, and IBM’s Institute for Business Value challenges conventional wisdom surrounding AI's potential impact on the economy, particularly focusing on the domain of computer vision.</div><div><br></div><div>In the age of rapid technological advancement, the fear of job displacement by machines – and the impact of AI on the economy and the workforce – has become a common refrain. As stated in McKinsey’s 2023 report The State of AI in 2023: Generative AI’s Breakout Year, “AI-related talent needs shift, and AI’s workforce effects are expected to be substantial.”</div><div><br></div><div>However, the study introduces a novel approach to understanding the economic feasibility of AI adoption, departing from previous broad-stroke models. 
Instead of merely speculating on the potential for AI to affect various sectors, the researchers developed an end-to-end AI task automation model. This model evaluates the technical performance required for AI systems to undertake specific tasks, the associated costs of building and deploying such systems, and the economic viability of adopting AI solutions.</div><div><br></div><div>One of the key findings of the study is that the current economic landscape does not favor widespread AI adoption in tasks involving computer vision. Only about 23% of wages paid for vision-related tasks are deemed economically viable for automation. This suggests a more gradual integration of AI into the workforce, contrary to the apocalyptic predictions of mass job displacement.</div><div><br></div><div>The researchers emphasize the importance of understanding the nuances between full task automation and partial automation. While AI has the potential to augment productivity in certain tasks, the decision to automate must be economically justified. For instance, even seemingly straightforward tasks, such as visually inspecting food quality in a small bakery, may not be cost-effective to automate due to the high upfront costs of AI systems in small companies without scale and the relatively low cost of some labor.</div><div><br></div><div><div>“The biggest contribution of our work is to take into account the costs that businesses would face when deploying AI,” said Neil Thompson, Principal Investigator at MIT CSAIL and the Initiative on the Digital Economy. “This contrasts with previous work which has focused only on whether AI might technically be able to do a task. Once we take economics into account, most of the tasks that had ‘AI exposure’ turn out to be unattractive to automate, at least in the short term.”</div><div><br></div><div>Moreover, the study explores how reductions in AI system costs and the emergence of AI-as-a-service platforms could influence the pace of automation. 
Scalability and wider application of AI technologies could democratize access to AI solutions, benefiting smaller businesses and organizations without the need for extensive in-house resources. For example, an AI-powered tool developed by NavTech can classify diamonds without a human jeweler. Another real-world example is Nvidia’s AI platform for autonomous vehicles, which provides vehicle manufacturers with updated deployments without building the capability in-house.</div><div><br></div><div>“Our research reveals two big trends that will help determine the pace of AI adoption in the future,” Thompson said. “One is reductions in the cost of AI deployments; as we find ways to build these systems more cheaply, more applications for AI automation will become attractive. The other is the creation of platforms where AI is delivered as a service to many players in an industry. The broader customer base of these business models will increase the financial benefits of automation and so accelerate that process.”</div><div><br></div><div>As our society considers the economic impact of AI, policy development and workforce upskilling and retraining must be considered. As certain jobs become automated, there will be a growing demand for roles focused on managing and improving AI systems, as well as roles where human skills remain irreplaceable.</div><div><br></div><div>In conclusion, the MIT study provides a nuanced understanding of AI's impact on the labor market. By meticulously assessing the technical, economic, and societal factors involved in AI adoption, the study offers valuable insights for policymakers, businesses, and workers navigating the challenges and opportunities presented by the integration of AI into the workplace. As AI continues to reshape industries, this research serves as a pivotal reference for guiding future explorations and policymaking in this rapidly evolving landscape.</div></div></div>]]></description>
			<pubDate>Tue, 13 Feb 2024 23:04:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/ter_thumb.jpg" length="154960" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?not-so-fast--study-finds-ai-job-displacement-likely-substantial,-yet-gradual</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000048</guid>
		</item>
		<item>
			<title><![CDATA[The best File Explorer alternative on Windows 11 just got better at handling large folders]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000047"><div><span class="fs12lh1-5"><b>What you need to know</b></span></div><div><span class="fs12lh1-5"><b><br></b></span></div><div><ul><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Files is a popular third-party alternative to the built-in File Explorer on Windows.</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">The Files app just received an update that brings it to version 3.2.</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">The update adds a list view layout, adds the option to edit album covers through properties, and adds support for higher quality thumbnails.</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Several fixes and general improvements ship with the update as well.</span></li></ul><slot name="default"></slot></div><div><span class="fs12lh1-5">Popular third-party file explorer Files just received an update that fixes one of its most frustrating issues. Following an update to version 3.2, the Files app should be more stable when navigating through large folders. The update also adds a list view for files and folders, brings the option to change album covers on media files, and adds support for higher quality thumbnails.<slot name="cont-read-break"></slot></span></div><div data-t="{&quot;n&quot;:&quot;intraArticle&quot;,&quot;t&quot;:13}"><slot name="BB1iaBTI-intraArticleModule-0"></slot></div><div><span class="fs12lh1-5">Files is not made by Microsoft, so it's not <em>the </em>File Explorer. But it is <em>a </em>file explorer. It's one of the more popular File Explorer alternatives available on Windows 11 and Windows 10. 
Many of the design elements and features seen in the Files app are among the most requested additions to the default File Explorer on Windows.</span></div><div class="imTACenter"><span class="fs12lh1-5"><img class="image-0" src="http://asianheritagesociety.org/images/BB1iaxAq.jpg"  title="" alt="" width="576" height="576" /><br></span></div><div class="imTACenter"><div style="text-align: start;"><span class="fs12lh1-5"><strong><span class="cf1">Files App |</span><span class="cf1"> </span></strong><span class="cf1"><strong>$8.99 at Microsoft Store</strong></span></span></div><div style="text-align: start;"><span class="fs12lh1-5"><span class="cf1">This third-party file explorer has many features people have requested for years from the built-in File Explorer on Windows. It has tabs, a column view, a file preview, and a customizable interface.</span></span></div><div class="imTALeft"><span class="fs12lh1-5">Replacing File Explorer on Windows</span></div><div class="imTALeft"><span class="fs12lh1-5"><img class="image-1" src="http://asianheritagesociety.org/images/BB1iaxAv.jpg"  title="" alt="" width="768" height="490" /><br></span></div><div class="imTALeft"><div><span class="fs12lh1-5">I've followed the development of the Files app for years, dating all the way back to when it was called Files UWP. I speak with its developer regularly and have used various beta versions of the app over the years. I love the design of Files and several of its features.</span></div><div><span class="fs12lh1-5">When I see people discuss the Files app, they usually laud its design and feature set. But performance of the app can vary from system to system. I've had good luck with the app on some computers and run into slower performance and stability issues on other PCs. 
The app has trended in the right direction at a good pace in my experience, and I like the changes seen in version 3.2.</span></div><div data-t="{&quot;n&quot;:&quot;intraArticle&quot;,&quot;t&quot;:13}"><slot name="BB1iaBTI-intraArticleModule-1"></slot></div><div><ul><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5"><strong><span class="cf1">Related:</span><span class="cf1"> </span></strong><span class="cf1"><strong>How to get started with the Files app in Windows 11</strong></span></span></li></ul><slot name="default"></slot></div><div><span class="fs12lh1-5">I don't think Files is ready to completely replace File Explorer on Windows, but it can be a powerful and useful companion app. Files has unique features that make it more than a better-looking clone of File Explorer. For example, its tagging system is excellent and will be familiar to those who use macOS.</span></div><div><span class="fs12lh1-5">I also like the fact that a third-party file management app frequently delivers features before Microsoft's File Explorer. Apps like this can push development of first-party apps forward, which is good for all Windows users.</span></div><div><span class="fs12lh1-5">The developer of Files shared a <span class="cf2">change log</span> of all the changes seen in version 3.2:</span></div><div><span class="fs12lh1-5">What's new in Files</span></div><div><ul><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">[Reduced] crashes when browsing large folders and when adding and deleting items.</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">List View is a new layout option that lets you display more items while taking up less space. 
It only shows the icons and file names of your items, without any extra details.</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">You can now change the album covers on media files directly from Files.</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">We’ve enhanced the resolution and contrast of our thumbnail previews to make them more visually appealing and easy to identify.</span></li></ul><slot name="default"></slot></div><div><span class="fs12lh1-5">Files 3.2 changes and improvements</span></div><div><ul><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Added options to hide the built-in items from the right click context menu</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Added an option to disable auto scroll when navigating up the file tree</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Updated the search query to include unindexed items by default</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Creating a new file now adds it to the Recent Files list</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Creating a shortcut will now use the naming preferences from File Explorer</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Clicking a tag in the Details Pane will now start a search for other tagged items</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Added support for setting jfif files as the desktop &amp; lockscreen background</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Improved the performance when launching Files in the background at Windows startup</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Improved support 
for high contrast themes</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Updated the cloud status icon in the Columns View</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Added support for pinning executable shortcuts to the Start Menu</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where updating the default layout wouldn’t refresh open tabs</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where renaming a tag wouldn’t save the new name</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where certain changes in the Properties Window couldn’t be canceled</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where switching from Details to Tiles would sometimes result in blurry icons</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where thumbnails would sometimes fail to load for OneDrive items</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where folder thumbnails wouldn’t display a preview of the contents</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where the Properties window was missing its icon</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where search results would sometimes use the Columns View</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where opening tags from the sidebar would default to the Details View</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where renaming items on a search page wouldn’t update the file list</span></li><li 
data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where the privacy policy link was broken</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where OneDrive files would automatically download</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where pinned applications were executed in %windir%\System32</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where smart extraction didn’t work correctly for a single folder</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where the path bar didn’t use localized name for system folders</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where directly opening a library would invoke explorer.exe</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed environment variables expansion for shortcuts</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where folders sizes weren’t calculated when opening Properties from the sidebar</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where modified date was missing from the Properties window</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where it didn’t work to target files when creating new shortcuts</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where exiting from the system tray icon didn’t save the open tabs</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where a new tab would open when trying to open a new window</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span 
class="fs12lh1-5 cf1">Fixed issue where batch files couldn’t be previewed inside archives</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where installing multiple fonts would trigger multiple UAC prompts</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed issue where refocusing Details View would sometimes scroll</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed crash that would occur when displaying a large number of items at the same time</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed crash that would occur when items were added from an external app</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed crash that would occur when opening Properties for certain items in the Recent Files list</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed crash that would occur when the app failed to update</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed crash that would occur when renaming items in the Grid View layout</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed crash that would occur when selecting the address bar via Shift + Tab</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed crash that would occur when Git path contained an emoji</span></li><li data-t="{&quot;n&quot;:&quot;blueLinks&quot;}"><span class="fs12lh1-5 cf1">Fixed crash that could occur when dragging in grouped grid layout</span></li></ul></div></div></div></div>]]></description>
			<pubDate>Mon, 12 Feb 2024 22:44:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/BB1iaxAv_thumb.jpg" length="65536" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?the-best-file-explorer-alternative-on-windows-11-just-got-better-at-handling-large-folders</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000047</guid>
		</item>
		<item>
			<title><![CDATA[Microsoft CEO Satya Nadella Aims to Empower India Through AI Skills Training]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000044"><div><span class="fs12lh1-5 cf1"><b>Microsoft CEO Satya Nadella Aims to Empower India Through AI Skills Training</b></span></div><div><span class="fs12lh1-5 cf1">In a bold and forward-thinking move, Microsoft CEO Satya Nadella has unveiled an ambitious plan to train 2 million people in India with essential AI skills. This initiative reflects Nadella's commitment to leveraging technology as a force for positive change and economic empowerment.</span></div><div><span class="fs12lh1-5 cf1"><b><br></b></span></div><div><span class="fs12lh1-5 cf1"><b>Empowering India's Workforce</b></span></div><div><span class="fs12lh1-5 cf1">With technology rapidly transforming industries worldwide, acquiring AI skills has become increasingly crucial for individuals seeking to remain competitive in the job market. Recognizing this trend, Nadella aims to democratize access to AI education and training in India, thereby equipping millions with the knowledge and expertise needed to thrive in the digital economy.</span></div><div><span class="fs12lh1-5 cf1"><b><br></b></span></div><div><span class="fs12lh1-5 cf1"><b>Addressing Skills Shortages</b></span></div><div><span class="fs12lh1-5 cf1">India, with its burgeoning population and dynamic economy, faces a pressing need for skilled workers in the field of artificial intelligence. By providing comprehensive training programs, Microsoft seeks to bridge the gap between demand and supply, empowering individuals from diverse backgrounds to pursue careers in AI-related fields.</span></div><div><span class="fs12lh1-5 cf1"><br></span></div><div><span class="fs12lh1-5 cf1"><b>Collaboration and Partnership</b></span></div><div><span class="fs12lh1-5 cf1">Nadella's vision for AI skills training in India involves collaboration with a wide range of stakeholders, including government agencies, educational institutions, and industry partners. 
By harnessing the collective expertise and resources of these stakeholders, Microsoft aims to create a scalable and sustainable framework for delivering high-quality AI education across the country.</span></div><div><span class="fs12lh1-5 cf1"><br></span></div><div><span class="fs12lh1-5 cf1"><b>Driving Innovation and Economic Growth</b></span></div><div><span class="fs12lh1-5 cf1">Beyond addressing immediate skills shortages, Nadella's initiative holds the potential to catalyze innovation and drive economic growth in India. By fostering a thriving ecosystem of AI talent, Microsoft aims to unlock new opportunities for entrepreneurship, research, and development, positioning India as a global hub for AI innovation.</span></div><div><span class="fs12lh1-5 cf1"><br></span></div><div><span class="fs12lh1-5 cf1"><b>Democratizing Access to Opportunity</b></span></div><div><span class="fs12lh1-5 cf1">At its core, Nadella's plan to train 2 million people in India with AI skills is about democratizing access to opportunity. By empowering individuals with the tools and knowledge they need to succeed in the digital age, Microsoft is helping to create a more inclusive and equitable society, where everyone has the chance to realize their full potential.</span></div><div><span class="fs12lh1-5 cf1"><br></span></div><div><span class="fs12lh1-5 cf1"><b>Conclusion</b></span></div><div><span class="fs12lh1-5 cf1">As Microsoft CEO Satya Nadella embarks on this ambitious endeavor to train 2 million people in India with AI skills, he is not only shaping the future of technology but also empowering individuals to shape their own futures. By investing in education, collaboration, and innovation, Nadella and Microsoft are laying the groundwork for a brighter, more prosperous India, where the benefits of AI are accessible to all.</span></div><div><span class="fs12lh1-5 cf1"><br></span></div><div><form><div></div></form></div></div>]]></description>
			<pubDate>Sun, 11 Feb 2024 21:49:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/BB1hUMuZ_thumb.jpg" length="32768" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?microsoft-ceo-satya-nadella-aims-to-empower-india-through-ai-skills-training</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000044</guid>
		</item>
		<item>
			<title><![CDATA[Dusty introduces a new version of its construction layout robot]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000002B"><div>When it launched in 2019, Dusty’s FieldPrinter added a clever new dimension to the world of construction automation. The autonomous mobile robot brought blueprints into the real world by drawing chalk outlines on the site’s floors, thus eliminating much of the guesswork in a job you really don’t want to guess at. The company says the first-gen ’bot has so far printed more than 91 million square feet.</div><div><br></div><div>Today the Bay Area-based startup is launching the sequel. As was the case with its predecessor, FieldPrinter 2 sports a big pair of friendly eyes — personification is a surprisingly effective way to integrate automation into the workplace. The little robot is smaller than the first gen, thus allowing it to better move around obstacles.</div><div><br></div><div>It now prints closer to edges and can “shadow print” behind columns. The 23-pound robot sports a wider print head and a bevy of on-board sensors for improved navigation. It can also be controlled via iPad.</div><div><br></div><div><img class="image-0" src="http://asianheritagesociety.org/images/dusty.jpg"  title="" alt="" width="880" height="584" /><br></div><div><br></div><div>Today’s news also marks the arrival of FieldPrint Platform, which is centered around BIM-to-field — that’s effectively bringing digital information into real-world construction sites.</div><div><br></div><div>“Our new FieldPrint Platform supports the seamless flow of data from the design phase, to the field, and back to the trailer,” cofounder and CEO Tessa Lau notes. “More than just a robot, Dusty provides an integrated software+hardware solution that architects, designers, and field operators utilize to get unparalleled accuracy, communication, and efficiency.”</div><div><br></div><div>Construction is currently shaping up to be one of robotics’ biggest categories. It is, after all, a $2 trillion industry in the U.S. alone. 
There are several aspects of the building process that are perfectly positioned for automation, especially during an era of staffing shortages. Predictably, Dusty’s innovative solution now has some competition, including, most notably, HP’s SitePrint.</div><div><br></div><div><div class="imTACenter"><iframe width="560" height="315" src="https://www.youtube.com/embed/-qiap7KThW8?si=8yIUIG0MR4Zky4ct" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></div></div></div>]]></description>
			<pubDate>Wed, 24 Jan 2024 04:52:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/Dusty_FP2_6MB_thumb.jpg" length="262846" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?dusty-introduces-a-new-version-of-its-construction-layout-robot</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000002B</guid>
		</item>
		<item>
			<title><![CDATA[Google Chrome gains AI features, including a writing helper, theme creator, and tab organizer]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000029"><div>Google’s Chrome web browser is getting an infusion of AI technology in the latest release. The company announced today it’s soon adding a trio of new AI-powered features to Chrome for Mac and Windows, including a way to smartly organize your tabs, customize your theme, and get help when writing things on the web — like forum posts, online reviews, and more.</div><div>The latter is similar to a feature already available in Google’s experimental AI search experience, SGE (Search Generative Experience), which allows users to get help drafting things like emails in different tones, such as more formal or more casual, or in different lengths.</div><div><br></div><div>With the built-in writing helper in Chrome, Google suggests users could write business reviews, “craft a friendly RSVP to a party,” or make a more formal inquiry about a vacation rental, among other things, including writing posts in public spaces, like online forum sites.</div><div><br></div><div><div>The still-experimental feature will be accessible in next month’s Chrome release by right-clicking on a text box or field on the web and then choosing “help me write.” To get started, you’ll first write a few words and then Google’s AI will jump in to help.</div><div><br></div><div>In addition to the writing assistant, AI can also be used to help organize tab groups and personalize your browser.</div><div>Chrome’s Tab Groups feature allows users who keep many tabs open to manage them by organizing them into groups. However, curating them can be a manual process, the company explains. With the new Tab Organizer, Chrome will automatically suggest and create groups based on the tabs you already have open. This feature will be available by right-clicking on a tab and selecting “Organize Similar Tabs.” Chrome will also suggest names and emojis for the tab groups it creates to make them easier to find. 
This feature is intended to assist when users are online shopping, researching, trip planning, or doing other tasks that tend to leave a lot of open tabs.</div></div><div><br></div><div><img class="image-0" src="http://asianheritagesociety.org/images/chrome-extension.jpg"  title="" alt="" width="880" height="587" /><br></div><div><div><span class="fs14lh1-5 cf1 ff1"><br></span></div><div><span class="fs14lh1-5 cf1 ff1">A final addition mirrors the new generative AI wallpaper experience that recently arrived on Android 14 and Pixel devices. Now Google will use the same text-to-image diffusion model to allow users to generate custom themes for their Chrome browser. The feature allows you to generate these themes by subject, mood, visual style, and color by selecting the new “Create with AI” option after opening the “Customize Chrome” side panel and clicking “Change theme.” Before, Chrome offered a variety of colorful but simple themes to choose from alongside those from artists, but this feature will allow users to expand beyond the built-in choices to create a theme that better matches their own current vibe.</span></div></div><div><span class="fs14lh1-5 cf1 ff1"><br></span></div><div><div>Though a busy theme could be distracting, the feature at least allows users who don’t have an Android phone to test-drive Google’s generative AI for personalization, even if they end up returning to a more basic theme for day-to-day use.</div><div>While the drafting feature won’t arrive until next month’s Chrome release, Google says that the other features, like the tab organizer and AI theme 
creator, will roll out over the next few days in the U.S. on both Mac and Windows with the current Chrome release (M121). To access these features, you’ll sign into Chrome, select “Settings” from the three-dot menu, and then navigate to the “Experimental AI” page. Because the features are experimental, they won’t ship to enterprise and educational customers at this time, the company notes.</div><div><br></div><div>The features join other AI-powered and machine learning (ML) tools already available in Chrome, like its ability to caption audio and video, protect users from malicious sites via Android’s Safe Browsing feature in Chrome, silence permission prompts, and summarize web pages via the “SGE while browsing” feature.</div><div><br></div><div>Google says that Chrome will be updated with more AI and ML features in the coming year, including through integrations with its new AI model, Gemini, which will be used to help make web browsing easier.</div></div></div>]]></description>
			<pubDate>Wed, 24 Jan 2024 02:44:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/Chrome_thumb.jpg" length="185820" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?google-chrome-gains-ai-features,-including-a-writing-helper,-theme-creator,-and-tab-organizer</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000029</guid>
		</item>
		<item>
			<title><![CDATA[Google’s new Gemini-powered conversational tool helps advertisers quickly build Search campaigns]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000028"><div>Google announced today that Gemini, its family of multimodal large language models, now powers the conversational experience within the Google Ads platform. With this new update, it will be easier for advertisers to quickly build and scale Search ad campaigns.</div><div><br></div><div>The conversational experience is designed to help build Search campaigns through a chat-based tool. The tool uses your website URL to create Search campaigns by generating relevant ad content, including assets and keywords. It suggests images tailored to your campaign using generative AI and images from your website. Google notes that all of the images created with generative AI will be identified as such.</div><div><br></div><div>Advertisers approve the images and text before the campaign goes live.</div><div><br></div><div>Beta access to the conversational experience in Google Ads is now available to all English language advertisers in the U.S. and U.K. Access will start opening up globally to all English language advertisers over the next few weeks. Google plans to open up access in additional languages in the upcoming months.</div><div><br></div><div>“Over the last few months, we’ve been testing the conversational experience with a small group of advertisers,” wrote Shashi Thakur, Google’s VP and GM of Google Ads, in a blog post. “We observed that it helps them build higher quality Search campaigns with less effort.”</div><div><br></div><div>The new tool will join Google’s other AI-powered tools for advertisers. A few months ago, Google introduced a suite of generative AI product imagery tools for advertisers in the U.S. called “Product Studio.” The tools allow merchants and advertisers to use text-to-image AI capabilities to create new product imagery for free by typing in a prompt describing what they would like to see. 
The tools also allow advertisers to improve low-quality images and remove distracting backgrounds.</div><div>Today’s announcement comes as Google has been pushing to integrate AI across its products. For instance, the company revealed today that it’s adding three new AI-powered features to Chrome, including a way to organize your tabs, customize your theme, and get help when writing things like online reviews or forum posts on the web.</div></div>]]></description>
			<pubDate>Wed, 24 Jan 2024 02:24:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/Google-HQ_thumb.jpg" length="82394" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?google-s-new-gemini-powered-conversational-tool-helps-advertisers-quickly-build-search-campaigns</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000028</guid>
		</item>
		<item>
			<title><![CDATA[AI startups’ margin profile could ding their long-term worth]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000027"><div><b>The expectation that</b> modern AI tech will find a home in every part of our lives is pandemic. Fittingly, startups and investors are working overtime to build and fund new technology companies to either create or implement new AI tech. Major rounds are often in the headlines, and startups are building at breakneck speeds to stay ahead of both the technology curve and the largest tech companies that have their own AI strategies.</div><div><br></div><div>But despite all the enthusiasm, there’s a niggling detail that deserves our attention: AI startups often have worse economics than most software startups.</div><div><br></div><div>The fact that Anthropic, a leading AI startup that has raised billions of dollars, reportedly had gross margins of 50% to 55% last December underscores the costs of building and running modern AI models, and hints that AI-focused startups have a different valuation profile due to the sheer expense of all that computing power.</div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">Revenue quality is partially predicated on gross margins — revenue less costs of goods sold — and the better those margins, the better the revenue, all else held equal. Startups have long depended on revenue quality as an explanation for their impressive losses during their scaling years — yes, startups consume lots of cash, but the revenue they generate is pristine in terms of quality, and thus worth quite a lot.</span></div><div><span class="fs12lh1-5"><br></span></div><div>This is, among other reasons, why software companies are frequently valued on a multiple of their revenue instead of their profits. When gross margins are high, strong revenue yields oodles of gross profit. Investors like that. 
But that’s not a valuation model that you can apply to a company that’s, say, <span class="cf1"><a href="https://get.doordash.com/en-us/blog/average-profit-margin-by-industry" target="_blank" class="imCssLink">selling groceries</a></span>.</div></div>]]></description>
			<pubDate>Wed, 24 Jan 2024 01:55:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/12222_thumb.jpg" length="656926" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?ai-startups--margin-profile-could-ding-their-long-term-worth</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000027</guid>
		</item>
		<item>
			<title><![CDATA[Kin.art launches free tool to prevent GenAI models from training on artwork]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000025"><div><span class="fs12lh1-5">It’s a wonder what generative AI, particularly text-to-image AI models like Midjourney and OpenAI’s DALL-E 3, can do. From photorealism to cubism, image-generating models can translate practically any description, short or detailed, into art that might well have emerged from an artist’s easel.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">The trouble is, many of these models — if not most — were trained on artwork without artists’ knowledge or permission. And while some vendors have begun compensating artists or offering ways to “opt out” of model training, many haven’t.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">In the absence of guidance from the courts and Congress, entrepreneurs and activists are releasing tools designed to enable artists to modify their artwork so that it can’t be used in training GenAI models. One such tool, Nightshade — released this week — makes subtle changes to the pixels of an image to trick models into thinking the image depicts something different from what it actually does. 
Another, <span class="cf1"><a href="https://kin.art/" target="_blank" class="imCssLink">Kin.art</a></span>, uses image segmentation (i.e., concealing parts of artwork) and tag randomization (swapping an art piece’s <span class="cf1"><a href="https://www.creativeforce.io/blog/image-meta-tags-explained-beginner-to-expert-guide" target="_blank" class="imCssLink">image metatags</a></span>) to interfere with the model training process.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">Launched today, Kin.art’s tool was co-developed by Flor Ronsmans De Vry, who co-founded Kin.art, an art commissions management platform, alongside Mai Akiyoshi and Ben Yu a few months ago.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">As Ronsmans De Vry explained in an interview, art-generating models are trained on datasets of labeled images to learn the associations between written concepts and images, like how the word “bird” can refer to not only bluebirds but also parakeets and bald eagles (in addition to more abstract notions). By “disrupting” either the image or the labels associated with a given piece of art, it becomes that much harder for vendors to use the artwork in model training, he says. </span></div><div><span class="fs12lh1-5"><br></span></div><div><div><span class="fs12lh1-5 cf2">“Designing a landscape where traditional art and generative art can coexist has become one of the major challenges the art industry faces,” Ronsmans De Vry told TechCrunch via email. “We believe this starts from an ethical approach to AI training, where the rights of artists are respected.”</span></div></div><div><span class="fs12lh1-5 cf2"><br></span></div><div><div><span class="fs12lh1-5">“Other tools out there to help protect against AI training try to mitigate the damage after your artwork has already been included in the dataset by poisoning,” Ronsmans De Vry said. 
“We prevent your artwork from being inserted in the first place.”</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">Now, Kin.art has a product to sell. While the tool is free, artists have to upload their artwork to Kin.art’s portfolio platform in order to use it. The idea at present, no doubt, is that the tool will funnel artists toward Kin.art’s range of fee-based art commission-finding and -facilitating services, its bread-and-butter business.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">But Ronsmans De Vry is positioning the effort as mostly philanthropic, pledging that Kin.art will make the tool available for third parties in the future.</span></div><div><span class="fs12lh1-5"><br></span></div><div><span class="fs12lh1-5">“After battle-testing our solution on our own platform, we plan to offer it as a service to allow any small website and big platform to easily protect their data from unlicensed use,” he said. “Owning and being able to defend your platform’s data in the age of AI is more important than ever . . . Some platforms are fortunate enough to be able to gate their data by blocking non-users from accessing it, but others need to provide public-facing services and don’t have this luxury. This is where solutions like ours come in.”</span></div></div></div>]]></description>
			<pubDate>Tue, 23 Jan 2024 23:56:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/12334_thumb.jpg" length="290477" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?kin-art-launches-free-tool-to-prevent-genai-models-from-training-on-artwork</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000025</guid>
		</item>
		<item>
			<title><![CDATA[Artisse AI raises $6.7M for its ‘more realistic’ AI photography app]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000024"><div><span class="cf1">Artisse</span>, one of the many AI photo creation apps that let users generate photos of themselves using uploaded selfies combined with prompts, has raised $6.7 million in seed funding, following AI selfies’ viral moment. Similar to others on the market, Artisse users upload photos of themselves to train its AI on their images, then use a text or image prompt to generate new photos of themselves across various settings, postures, and styles. But unlike the competition, Artisse is focused on making its images more photo-realistic, so they could stand in for professional photography if required.</div><div><br></div><div>Under the hood, Artisse is using its own proprietary model, alongside best practices and elements from open source models and tools. Because of the hyper-realistic images the app produces, Artisse became a top photo app on the Google Play Store at various times across markets including the U.S., U.K., Japan, South Korea, Canada, and Australia.</div><div><br></div><div>The app has been downloaded over 200,000 times to date, and its images reached an estimated 43 million people across social media, the company claims. To date, users have created over 5 million photos, its website notes.</div><div><br></div><div><div>Originally bootstrapped, Artisse was founded by William Wu, who previously worked in investment and strategy with roles at McKinsey & Co. and Oaktree Capital. The founder <span class="cf1">told TechCrunch back in September</span> that he was inspired to build an AI app after seeing how many people had “perfect” photos uploaded to their Instagram or dating profiles. However, he realized that to have those results, you’d need time and expertise with personal photography. 
His idea was to make that same type of photography more accessible to anyone with a smartphone.</div><div><br></div><div>Training Artisse therefore takes longer than with competitors — around 30 to 40 minutes, though the AI images take minutes to produce. Wu said this system allows it to produce more realistic images than some others on the market.</div><div>This is what Wu hopes will be Artisse’s differentiating feature.</div></div><div><br></div><div><div>“Midjourney clearly does well when it comes to landscapes and design work, but when it comes to people — the way to think about it, is there’s a lot of different factors and you need to build individual training sets for each of those factors.”</div><div><br></div><div>That means Artisse’s model takes into consideration factors like race, facial structures, skin color, lighting, camera type, camera angle, the way the body is shot, the scenery, and more.</div><div><br></div><div>Plus, adds Wu, “There’s an incredible amount of work that’s required in terms of data collection, data tagging, knowing what makes a good camera photo versus not.”</div><div><br></div><div>Artisse’s AI was trained on public domain photography, Wu notes.</div><div><br></div><div>“A lot of this is not about volume, it’s actually a lot about the quality of the image,” he says.</div><div><br></div><div>Like many apps in this space, Artisse has to overcome struggles in areas like the diversity of body shapes and skin tones, especially if users upload a reference photo where the person in the image is thinner. 
Another viral app, Remini, <span class="cf1">faced complaints</span> in this area from women who said the app made them look skinnier or gave them larger chests.</div><div><br></div><div>Artisse aims to stand out from apps like Remini and Lensa by producing photos that could be used in real life.</div><div>However, the startup’s AI model is flexible enough that users could do things with their photos that wouldn’t be appropriate, like changing their race.</div><div><br></div><div>But Wu says he’s not encouraging that, nor is this how people are generally using the product.</div></div><div><br></div><div><img class="image-0" src="http://asianheritagesociety.org/images/Screenshot-2023-09-05-at-9.57.31-AM.jpg"  title="" alt="" width="768" height="285" /><br></div><div><div><br></div><div>Instead, Artisse’s users tend to leverage the app to post photos of themselves on social media — particularly those they wouldn’t be able to capture otherwise — like shots where they’re posed next to a fancy car or wearing some high-fashion look. Models and influencers are among Artisse’s early adopters along with some businesses using AI photography for their ads.</div><div><br></div><div>The app initially monetized by offering 25 photos for free, then charging around 20 cents per photo afterward. That attracted a casual audience who dabbled with the tech — Artisse said around 60-70% of users have been “light” users who try out the app one time. Of the 200K downloads, around 4,000 have converted to subscribers, which is the app’s new monetization model.</div><div><br></div><div>There are currently three tiers, priced at $7, $15, and $40 per month, where you receive anywhere from 25 to 370 photos.</div><div>Artisse claims to have tripled revenue to $1 million ARR in December 2023 and is on track for $2.5 million ARR as of this month.</div><div><br></div><div>“Revenue is growing pretty fast, payback period is relatively low,” Wu notes. 
“I see AI photography as a new category that should probably be of a similar size to, if not bigger than, photo editing apps,” he says.</div><div><br></div><div>The startup’s $6.7 million seed funding round was led by The London Fund, a firm that makes strategic investments in high-growth companies with several consumer businesses in their portfolio.</div><div><br></div><div>The investment, which was inbound, made sense because the fund has an influencer marketing arm and could help with marketing the app, Wu explains. The round is still open to others.</div><div><br></div><div>Going forward, the 22-person team is looking to leverage its AI tech in other ways beyond consumer photos. It’s currently exploring virtual fitting room tech for online shopping, where you can model clothes on yourself in different fits and poses, as well as a group photo feature that could one day let you “pose” with a friend or even a celebrity you’re a fan of (with permission). Shopping from AI photos and turning them into physical prints are other ideas being explored.</div><div><br></div><div>Artisse’s AI app is available on both <span class="cf1">iOS</span> and <span class="cf1">Android</span>.</div></div><div><div><iframe width="560" height="315" src="https://www.youtube.com/embed/jYJl7ILkdvE?si=k7ImQHjfxma07-tD" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></div></div></div>]]></description>
			<pubDate>Tue, 23 Jan 2024 23:21:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/1132_thumb.jpg" length="300335" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?artisse-ai-raises--6-7m-for-its--more-realistic--ai-photography-app</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000024</guid>
		</item>
		<item>
			<title><![CDATA[I spent the morning with the Apple Vision Pro]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000026"><div><b>Update: The Vision</b> Pro is now available for preorder on Apple’s site for $3,500. Apple has also offered a better look with a <a href="https://www.youtube.com/watch?v=Vb0dG-2huJE" onclick="return x5engine.imShowBox({ media:[{type: 'youtube', url: 'https://www.youtube.com/watch?v=Vb0dG-2huJE', width: 1920, height: 1080, text: '', 'showVideoControls': true }]}, 0, this);" class="imCssLink"><span class="cf1">10-minute “guided tour”</span> </a>of the device. It’s set to start shipping February 2.</div><div><br></div><div>“Avatar” first arrived in theaters in 2009. It was a technological marvel that gave audiences one of the most immersive filmgoing experiences in the medium’s history. If contemporary online forums are to be believed, it also gave some theatergoers something entirely unexpected: depression.</div><div><br></div><div>Not long after the film’s release, CNN reported on a strange new phenomenon that some deemed “Avatar Depression.” The film had been so immersive, a handful of audience members reported experiencing a kind of emptiness when they left the theater, and Pandora along with it.</div><div><br></div><div>As extended reality experiences have grown more prevalent on the success of headsets from companies like Meta, HTC and Sony, many have experienced similar phenomena. The more immersive the experience, the more jarring it can feel when you finally take the headset off.</div><div><br></div><div>After all, at their heart, these sorts of headsets are about effectively tricking one’s brain into believing what it’s seeing. That cognitive dissonance is also what creates the motion sickness some experience in VR. 
Your body and brain are, for all intents and purposes, experiencing different realities.</div><div><img class="image-1" src="http://asianheritagesociety.org/images/1.jpg"  title="" alt="" width="880" height="495" /><br></div><div><div>The Vision Pro isn’t a virtual reality headset — at least not as far as Apple is concerned. If you follow the company’s press materials, it’s a spatial computing device. In practical terms, it’s mixed reality. While many or most applications are thus far experienced as augmented reality, by way of on-board passthrough technology, the device is also capable of going fully immersive with a quick twist of the Apple Watch–style digital crown that sits atop the visor.</div><div><br></div><div>This week, Apple is giving select members of the media Vision Pro demos. I spent some time with the headset earlier today. It was my first hands-on experience with the hardware, as Matthew had the honor when it was unveiled at WWDC over the summer. The idea was to walk through as many elements as possible in roughly 60 minutes, from the initial app face scans, to the spatial desktop, to movie watching (no gaming, this time out, sadly).</div></div><div><br></div><div><div>The company was careful to provide both ends of the Vision Pro immersion spectrum, ranging from full passthrough to Environments, an all-encompassing nature scene that’s a bit like walking into a live photo on infinite loop. An hour spent cycling through different apps probably isn’t enough to experience full-on “Avatar” depression (certainly not in my case), but it does afford a glimpse at a world where such phenomena are a distinct possibility, especially as display resolutions are capable of rendering increasingly life-like images.</div><div><br></div><div>In the case of the Vision Pro, the screen is everything. As handsets have arrived at a point where 4K and 120Hz refresh rates are no longer novelties, headsets have taken up the mantle. 
Much of the Vision’s ability to do what it does is dependent on the pair of micro-OLED displays, which jam in a combined 23 million pixels — more resolution than 4K for each eye. That has the effect of creating an extremely dense display out in front.</div><div><br></div><div>Of course, this is Apple, so every aspect of the hardware is painstakingly considered. That begins with the fitting process. Starting February 2, Apple will have Geniuses on hand at all of its U.S. stores to guide buyers through the process. The exact nature of the in-store experience hasn’t been outlined, but a portion of the floor will be devoted to this, rather than it all happening within the confines of the Genius Bar.</div><div><br></div><div>Of course, not everyone lives near an Apple Store. As such, the company will also make the process available via the app. In fact, the at-home version relies on the same app employees will be using in-store. The first step is almost indistinguishable from the process of setting up Face ID on an iPhone. You hold the phone up near your face and then move your phone around in a circle as it takes a scan from different angles. You’ll do this twice.</div><div><br></div><div><img class="image-3" src="http://asianheritagesociety.org/images/aaa.jpg"  title="" alt="" width="880" height="421" /><br></div><div><br></div><div>From here, the system will determine which components will best fit with your face shape. All faces are different, of course. There’s a massive range, and getting the wrong component could dramatically impact the experience. We ran into some issues with my face (not the first time those words have been uttered). The Light Seal, which magnetically attaches to the headset, is designed to keep ambient light from leaking in.</div><div><br></div><div>I just couldn’t get it quite right. We ultimately ran out of time and I had to soldier on with light pouring in from the nose bridge and my cheekbones. 
If you’ve ever had a similar experience with a headset, you know it’s an annoyance at first, but your brain ultimately adjusts and you forget it’s there. There were, however, a few dark demos where it once again made itself known.</div><div><br></div><div>I’ve recently read some hands-on write-ups that reported some discomfort after wearing the hardware for a full hour. I didn’t experience this, but your mileage will, of course, vary. To more comfortably distribute the device’s pound of weight, Apple is including a pair of straps in the box. There’s the Solo Knit Band, which is the big, padded one you see in all the pictures. Apple is also tossing in the Dual Loop, which is narrower and has a secondary band that goes over the top of the head.</div><div>I wore the latter in the demo, assuming that it would do a better job with weight distribution. The straps snap on magnetically and feature Velcro for adjustments. And then, of course, there’s the battery pack. My guess is that Apple designers fought like hell to find a way around it. Ultimately, however, doing so would either mean a dramatic loss of battery life or a lot more weight added to the headset.</div><div><br></div><div>For better or worse, the hardware world is one of compromise. There are, after all, limits to physics. As it stands, the battery pack is a bit of a vestigial organ, and not a particularly elegant one at that. It feels like a very first-gen element to be addressed in subsequent versions.</div><div><br></div><div>It’s long enough that you can run it behind you while you sit, or stuff it in a pocket. I have zero doubt that the coming months will also see a number of solutions from third-party accessory manufacturers, like battery belts that promise an AR element.</div><div><br></div><div>Once you’re up and running, though, you’ll forget it’s there. This itself can ultimately be an issue, if you decide you want to stand, as I did, halfway through the demo. 
I got a slight jerk from the pack on doing so. Moral of the story: if you plan to do a lot of standing while wearing the headset, find a good spot for the battery.</div><div><br></div><div>The UX is largely gesture based. You’re going to do more pinching than an overzealous prankster on St. Patrick’s Day in this thing. The secret sauce is a combination of eye tracking and pinching. Look at an icon and it will pulse subtly. Now you can pinch to select. Pinch your fingers and swipe left or right to scroll. Pinch your fingers on both hands and pull them apart to zoom. There’s a bit of a learning curve, but you’ll get up and running quickly. I believe in you.</div><div><br></div><div><img class="image-0" src="http://asianheritagesociety.org/images/c8d15a06-c537-4b52-9fe4-5b459fb7fdd8-cover.png"  title="" alt="" width="880" height="440" /><br></div><div><br></div><div>The hand tracking is very good here. You don’t have to lift your hands (though you probably will, instinctually), just as long as you ensure that they’re not occluded from the line of sight. I largely rested mine on my lap throughout.</div><div>Further refinement can be found through a button and digital crown located on the top of the visor. The crown is really not much more than a bigger version of what you get on the Apple Watch.</div><div><br></div><div>Once up and running, I immediately entered the world of passthrough. This isn’t a new idea. Magic Leap does this, as do new headsets from Meta and HTC. A fully immersive experience requires visor opacity. This means that you can’t simply look through the glass at the world around you. Passthrough utilizes on-board cameras to get an image of your surroundings and beam it to your eyes with as little latency as possible.</div><div><br></div><div>Of course, human beings are quite adept at noticing latency. This is another one of those brain/body things. 
If the headset effectively tricks your brain into believing it’s looking directly at an image, the smallest perceptible bit of latency will be jarring. There is a small bit here. That’s to be expected. It’s not enough, however, to be truly off-putting. Again, you get used to it. (I’m going to be saying that a lot. Get used to it.)</div><div><br></div><div>You also get used to the passthrough itself. While it’s probably the best version of the technology I’ve experienced, it’s still immediately obvious that you’re not actually looking through a transparent surface. If the headset is a window, it’s a little foggy. The image isn’t as sharp as reality, nor is it as bright. Remember that bit before about getting used to it? That applies again.</div><div><br></div><div>Passthrough is a foundational technology here for a number of reasons. The first and most practical is so you don’t run into shit. Simple enough. The second is that spatial computing element we talked about 1,300 or so words back. The world, to paraphrase Billy Corgan, is a desktop.</div></div><div><br></div><div><div>This is the bit you’ve seen in all the videos. For those who imagined the Vision Pro as a gaming-first device, it was surprising just how much Apple leaned into this idea of spatial computing. In the grand scheme of things we imagine doing with mixed reality headsets, it’s not one of the sexiest. It’s work. It’s sitting at a desk typing or scrolling the internet. The rub is that there’s no desktop monitor — or rather, reality is your desktop monitor.</div><div><br></div><div><span class="fs12lh1-5">Again, Apple isn’t the first company to try this. It may well, however, be the most ambitious. It’s a great effect. As someone typing this to you while seated at a desk in front of two large monitors, the appeal is clear. Heck, if you read me with any regularity, you know that after decades of going TV free, I recently got a projector. 
As I was shopping for projector screens, I found the one that best suited my needs also happened to be 100 inches.</span><br></div><div><br></div><div>One hundred inches is — and I can’t express this enough — a lot of inches. I have a smallish one-bedroom apartment. The projector screen now monopolizes an entire wall of it. Using the Vision Pro, it strikes me that Apple has done a truly excellent job approximating distances and points in space.</div><div><br></div><div><img class="image-5" src="http://asianheritagesociety.org/images/bbb.jpg"  title="" alt="" width="880" height="495" /><br></div><div><br></div><div>Watching a movie on Vision Pro feels like watching a movie projected large on the wall in front of you. Utilizing the spatial computing element, meanwhile, really gives the effect of picking up app windows and moving them around in front of you. You can have (more or less) as many open at once as you please, like you would on your desktop or phone. It’s the first computing device I’ve used where real estate doesn’t feel like a premium. Want to open another app? Just toss it to the side.</div></div><div><br></div><div><div>If reality is too boring, flip on the Environments feature we discussed before, and do your taxes atop a Hawaiian volcano at sunset. Apple is also opening up Environments to third parties. Disney made a few, so I spent a bit of time at Avengers HQ and in a parked speeder on Tatooine. It’s a fun reminder of how much of my childhood IP that public-domained mouse currently owns.</div><div><br></div><div>For my money, aside from watching a movie, the most immersive experience today was Encounter Dinosaurs. Apple worked with Jon Favreau and other folks behind the Apple TV+ show Prehistoric Planet to create an impressive dinosaur experience. 
These projects remind one a bit of some of the first-party app experiences Apple created to show off the original iPad’s display.</div><div><br></div><div>Here, a portal opens on the wall, showcasing a craggy prehistoric landscape. A couple of large carnivores reminiscent of the T. rex step through to give you a sniff. It’s very cool and makes you feel like a kid for a moment (never take the headset off and you’ll never have to confront any adult responsibilities). I loved it. The graphics are impressive, the AI makes the dinosaurs respond to the user’s movements, and audio pod speakers on either side really bring to life the cacophonous snorts and grunts of a curious carnivore.</div><div><br></div><div>Encounter Dinosaurs isn’t a foundational selling point, but it’s a great signpost for where things are going. Today’s demo was, unfortunately, wholly devoid of gaming, but the dinosaur experience gave me a good bit of hope about future experiences. Honestly, I could have spent the full hour chilling with dinosaurs and been perfectly happy. That’s probably just me.</div></div><div><br></div><div><div>What may have been the most impressive thing about the demo, however, is that it felt wholly immersive even with passthrough on. It’s a strange sensation feeling transported while being very much grounded in reality.</div><div><img class="image-4" src="http://asianheritagesociety.org/images/1-3.jpg"  title="" alt="" width="880" height="495" /><br></div><div><span class="fs12lh1-5">Another surprisingly immersive moment came while trying the Mindfulness app. It took decades of me banging my head against the wall (metaphorically) to really start to see the benefits of meditation. The Vision Pro, however, feels like a bit of a cheat code. The app centers around a flower petal ring that moves in and out to help you control your breathing (it’s similar to the app of the same name on Apple Watch). 
It’s very centering and something I absolutely plan to take advantage of if and when we get a test unit.</span><br></div><div><br></div><div>Spatial photos and videos also warrant a mention here. Shot on the iPhone 15 Pro, the images create a 3D scene with a real sense of depth. Remember ViewMaster? Imagine that, only with your photos and videos, and you get a rough approximation of the experience. One video, shot at a family table, felt downright intrusive, as though you’re watching strangers interact in their own kitchen.</div><div><br></div><div>If you turn your head toward a person while in one of these fully immersive experiences, you’ll begin to see their figure come through. The system utilizes people recognition and will not do the same with objects. It’s just another way to help wearers avoid being fully cut off from reality.</div><div><br></div><div>For the people around you, there’s EyeSight (not to be confused with iSight). Remember the scanning process at the beginning? Another thing the app does is build a virtual version of your face. When you look at someone, an image of the top of your face (mostly your eyes) appears in a small virtual cutout on the visor. Cameras inside the headset see when you do something like blink or grimace, and the image responds in real time, with AI creating an approximation of what your face looks like when doing that.</div></div><div><br></div><div><div><span class="cf2">The feature exists to circumvent potential privacy concerns, providing a subtle way for people around you to know when you’re looking at them. The contents on the inside of the screen can also be broadcast to an iOS device via AirPlay, so people around you can follow along with what you see.</span></div><div><span class="cf2"><br></span></div><div><span class="cf2">Preorders for the Vision Pro open this Friday, January 19. The headset hits retail on February 2. Apple has promised more news and content announcements between now and then. 
As is, it’s an impressive demonstration of a new paradigm for the company — one that took the better part of a decade to develop. It brings together a number of different things the company has been working on over the years, such as spatial audio, into a truly compelling package.</span></div><div><span class="cf2"><br></span></div><div><span class="cf2">Is it $3,500 compelling, however? After an hour of testing, I’m not fully convinced. For one thing, that price is prohibitively expensive for a majority of people who would be interested in the system. For another, it feels like we’re very much in the primordial stages of the content story. Much of what is on offer is existing apps ported over. They’re still neat in this setting, but it’s harder to make the case that they’re revolutionary.</span></div><div><span class="cf2"><br></span></div><div><span class="cf2">Taken as a whole, however, the Vision Pro just might be.</span></div><div><br></div></div></div>]]></description>
			<pubDate>Sat, 20 Jan 2024 00:14:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/qq_thumb.jpg" length="204537" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?i-spent-the-morning-with-the-apple-vision-pro</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000026</guid>
		</item>
		<item>
			<title><![CDATA[OpenAI: Copyrighted data ‘impossible’ to avoid for AI training]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000017"><div class="imTAJustify"><span class="cf1">OpenAI</span> made waves this week with its bold assertion to a UK parliamentary committee that it would be “impossible” to develop today’s leading AI systems without using vast amounts of copyrighted data.</div><div class="imTAJustify">The company argued that advanced AI tools like ChatGPT require such broad training that adhering to copyright law would be utterly unworkable.</div><div class="imTAJustify">In written testimony, OpenAI <span class="cf1">stated</span> that between expansive copyright laws and the ubiquity of protected online content, “virtually every sort of human expression” would be off-limits for training data. From news articles to forum comments to digital images, little online content can be utilised freely and legally.</div><div class="imTAJustify">According to OpenAI, attempts to create capable AI while avoiding copyright infringement would fail: “Limiting training data to public domain books and drawings created more than a century ago … would not provide AI systems that meet the needs of today’s citizens.”</div><div class="imTAJustify">While defending its practices as compliant, OpenAI conceded that partnerships and compensation schemes with publishers may be warranted to “support and empower creators.” But the company gave no indication that it intends to dramatically restrict its harvesting of online data, including paywalled journalism and literature.</div><div class="imTAJustify">This stance has opened OpenAI up to multiple lawsuits, including from media outlets like The New York Times <span class="cf1">alleging</span> copyright breaches.</div><div class="imTAJustify">Nonetheless, OpenAI appears unwilling to fundamentally alter its data collection and training processes—given the “impossible” constraints self-imposed copyright limits would bring. 
The company instead hopes to rely on broad interpretations of fair use allowances to legally leverage vast swathes of copyrighted data.</div><div class="imTAJustify">As advanced AI continues to demonstrate uncanny abilities emulating human expression, legal experts expect vigorous courtroom battles around infringement by systems intrinsically designed to absorb enormous volumes of protected text, media, and other creative output. </div><div class="imTAJustify">For now, OpenAI is betting against copyright maximalists in favour of near-boundless copying to drive ongoing AI development.</div><div class="imTAJustify"><em>(Photo by <span class="cf1">Levart_Photographer</span> on <span class="cf1">Unsplash</span>)</em></div></div>]]></description>
			<pubDate>Thu, 18 Jan 2024 12:51:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/open-ai_thumb.jpg" length="158135" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?openai--copyrighted-data--impossible--to-avoid-for-ai-training</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000017</guid>
		</item>
		<item>
			<title><![CDATA[McAfee unveils AI-powered deepfake audio detection]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000016"><div>McAfee revealed a pioneering AI-powered deepfake audio detection technology, Project Mockingbird, during CES 2024. This proprietary technology aims to defend consumers against the rising menace of cybercriminals employing fabricated, AI-generated audio for scams, cyberbullying, and manipulation of public figures’ images.</div><div><br></div><div>Generative AI tools have enabled cybercriminals to craft convincing scams, including voice cloning to impersonate family members seeking money or manipulating authentic videos with “cheapfakes.” These tactics manipulate content to deceive individuals, creating a heightened challenge for consumers to discern between real and manipulated information.</div><div><br></div><div>In response to this challenge, McAfee Labs developed an industry-leading AI model, part of the Project Mockingbird technology, to detect AI-generated audio. This technology employs contextual, behavioural, and categorical detection models, achieving an impressive 90 percent accuracy rate.</div><div><br></div><div>Steve Grobman, CTO at McAfee, said: “Much like a weather forecast indicating a 70 percent chance of rain helps you plan your day, our technology equips you with insights to make educated decisions about whether content is what it appears to be.”</div><div><br></div><div>Project Mockingbird offers diverse applications, from countering AI-generated scams to tackling disinformation. By empowering consumers to distinguish between authentic and manipulated content, McAfee aims to protect users from falling victim to fraudulent schemes and ensure a secure digital experience.</div><div><br></div><div>Deep concerns about deepfakes</div><div>As deepfake technology becomes more sophisticated, consumer concerns are on the rise. 
McAfee’s December 2023 Deepfakes Survey highlights:</div><div><br></div><div><ul><li>84% of Americans are concerned about deepfake usage in 2024</li><li>68% are more concerned than they were a year ago</li><li>33% have experienced or witnessed a deepfake scam, rising to 40% among 18–34 year-olds</li><li>Top concerns include election influence (52%), undermining public trust in media (48%), impersonation of public figures (49%), proliferation of scams (57%), cyberbullying (44%), and sexually explicit content creation (37%)</li></ul></div><div>McAfee’s unveiling of Project Mockingbird marks a significant leap in the ongoing battle against AI-generated threats. As countries like the US and UK enter a pivotal election year, it’s crucial that consumers are given the best chance possible at grappling with the pervasive influence of deepfake technology.</div></div>]]></description>
			<pubDate>Thu, 18 Jan 2024 11:59:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/mc_thumb.jpg" length="39024" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?mcafee-unveils-ai-powered-deepfake-audio-detection</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000016</guid>
		</item>
		<item>
			<title><![CDATA[Multiple AI models help robots execute complex plans more transparently]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000015"><div><span class="fs12lh1-5 ff1">Your daily to-</span><span class="fs12lh1-5 ff1">do list is likely pretty straightforward: wash the dishes, buy groceries, and other minutiae. It’s unlikely you wrote out “pick up the first dirty dish,” or “wash that plate with a sponge,” because each of these miniature steps within the chore feels intuitive. While we can routinely complete each step without much thought, a robot requires a complex plan that involves more detailed outlines.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">MIT’s Improbable AI Lab, a group within the Computer Science and Artificial Intelligence Laboratory (CSAIL), has offered these machines a helping hand with a new multimodal framework: Compositional Foundation Models for Hierarchical Planning (HiP), which develops detailed, feasible plans with the expertise of three different foundation models. Like OpenAI’s GPT-</span><span class="fs12lh1-5 ff1">4, the foundation model that ChatGPT and Bing Chat were built upon, these foundation models are trained on massive quantities of data for applications like generating images, translating text, and robotics.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Unlike RT2 and other multimodal models that are trained on paired vision, language, and action data, HiP uses three different foundation models, each trained on a different data modality. Each foundation model captures a different part of the decision-</span><span class="fs12lh1-5 ff1">making process, and the models then work together when it’s time to make decisions. HiP removes the need for access to paired vision, language, and action data, which is difficult to obtain. 
HiP also makes the reasoning process more transparent.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">What’s considered a daily chore for a human can be a robot’s “long-</span><span class="fs12lh1-5 ff1">horizon goal” — an overarching objective that involves completing many smaller steps first — requiring sufficient data to plan, understand, and execute objectives. While computer vision researchers have attempted to build monolithic foundation models for this problem, pairing language, visual, and action data is expensive. Instead, HiP represents a different, multimodal recipe: a trio that cheaply incorporates linguistic, physical, and environmental intelligence into a robot.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">“Foundation models do not have to be monolithic,” says NVIDIA AI researcher Jim Fan, who was not involved in the paper. “This work decomposes the complex task of embodied agent planning into three constituent models: a language reasoner, a visual world model, and an action planner. It makes a difficult decision-</span><span class="fs12lh1-5 ff1">making problem more tractable and transparent.”</span></div><div><br></div><div><span class="fs12lh1-5 ff1">The team believes that their system could help these machines accomplish household chores, such as putting away a book or placing a bowl in the dishwasher. Additionally, HiP could assist with multistep construction and manufacturing tasks, like stacking and placing different materials in specific sequences.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Evaluating HiP</span><br></div><div><br></div><div><span class="fs12lh1-5 ff1">The CSAIL team tested HiP’s acuity on three manipulation tasks, outperforming comparable frameworks. 
The system reasoned by developing intelligent plans that adapt to new information.</span><br></div><div><br></div><div><span class="fs12lh1-5 ff1">First, the researchers requested that it stack different-</span><span class="fs12lh1-5 ff1">colored blocks on each other and then place others nearby. The catch: Some of the correct colors weren’t present, so the robot had to place white blocks in a color bowl to paint them. HiP often adjusted to these changes accurately, especially compared to state-</span><span class="fs12lh1-5 ff1">of-</span><span class="fs12lh1-5 ff1">the-</span><span class="fs12lh1-5 ff1">art task planning systems like Transformer BC and Action Diffuser, by adjusting its plans to stack and place each square as needed.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Another test: arranging objects such as candy and a hammer in a brown box while ignoring other items. Some of the objects it needed to move were dirty, so HiP adjusted its plans to place them in a cleaning box, and then into the brown container. In a third demonstration, the bot was able to ignore unnecessary objects to complete kitchen sub-</span><span class="fs12lh1-5 ff1">goals such as opening a microwave, clearing a kettle out of the way, and turning on a light. Some of the prompted steps had already been completed, so the robot adapted by skipping those directions.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">A three-</span><span class="fs12lh1-5 ff1">pronged hierarchy</span></div><div><br></div><div><span class="fs12lh1-5 ff1">HiP’s three-</span><span class="fs12lh1-5 ff1">pronged planning process operates as a hierarchy, with the ability to pre-</span><span class="fs12lh1-5 ff1">train each of its components on different sets of data, including information outside of robotics. At the bottom of that order is a large language model (LLM), which starts to ideate by capturing all the symbolic information needed and developing an abstract task plan. 
Applying the common sense knowledge it finds on the internet, the model breaks its objective into sub-</span><span class="fs12lh1-5 ff1">goals. For example, “making a cup of tea” turns into “filling a pot with water,” “boiling the pot,” and the subsequent actions required.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">“All we want to do is take existing pre-</span><span class="fs12lh1-5 ff1">trained models and have them successfully interface with each other,” says Anurag Ajay, a PhD student in the MIT Department of Electrical Engineering and Computer Science (EECS) and a CSAIL affiliate. “Instead of pushing for one model to do everything, we combine multiple ones that leverage different modalities of internet data. When used in tandem, they help with robotic decision-</span><span class="fs12lh1-5 ff1">making and can potentially aid with tasks in homes, factories, and construction sites.”</span></div><div><br></div><div><span class="fs12lh1-5 ff1">These models also need some form of “eyes” to understand the environment they’re operating in and correctly execute each sub-</span><span class="fs12lh1-5 ff1">goal. The team used a large video diffusion model to augment the initial planning completed by the LLM, which collects geometric and physical information about the world from footage on the internet. In turn, the video model generates an observation trajectory plan, refining the LLM’s outline to incorporate new physical knowledge.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">This process, known as iterative refinement, allows HiP to reason about its ideas, taking in feedback at each stage to generate a more practical outline. 
The flow of feedback is similar to writing an article: an author sends a draft to an editor, those revisions are incorporated, and the publisher reviews the piece for any last changes before finalizing it.</span><br></div><div><br></div><div><span class="fs12lh1-5 ff1">In this case, the top of the hierarchy is an egocentric action model, which uses a sequence of first-</span><span class="fs12lh1-5">person images to infer which actions should take place based on its surroundings. During this stage, the observation plan from the video model is mapped over the space visible to the robot, helping the machine decide how to execute each task within the long-</span><span class="fs12lh1-5">horizon goal. If a robot uses HiP to make tea, this means it will have mapped out exactly where the pot, sink, and other key visual elements are, and can begin completing each sub-</span><span class="fs12lh1-5">goal.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Still, the multimodal work is limited by the lack of high-</span><span class="fs12lh1-5 ff1">quality video foundation models. Once available, they could interface with HiP’s small-</span><span class="fs12lh1-5 ff1">scale video models to further enhance visual sequence prediction and robot action generation. A higher-</span><span class="fs12lh1-5 ff1">quality version would also reduce the current data requirements of the video models.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">That being said, the CSAIL team’s approach used only a small amount of data overall. Moreover, HiP was cheap to train and demonstrated the potential of using readily available foundation models to complete long-</span><span class="fs12lh1-5 ff1">horizon tasks. “What Anurag has demonstrated is proof-</span><span class="fs12lh1-5 ff1">of-</span><span class="fs12lh1-5 ff1">concept of how we can take models trained on separate tasks and data modalities and combine them into models for robotic planning. 
In the future, HiP could be augmented with pre-</span><span class="fs12lh1-5 ff1">trained models that can process touch and sound to make better plans,” says senior author Pulkit Agrawal, MIT assistant professor in EECS and director of the Improbable AI Lab. The group is also considering applying HiP to solving real-</span><span class="fs12lh1-5 ff1">world long-</span><span class="fs12lh1-5 ff1">horizon tasks in robotics.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Ajay and Agrawal are lead authors on a paper describing the work. They are joined by MIT professors and CSAIL principal investigators Tommi Jaakkola, Joshua Tenenbaum, and Leslie Pack Kaelbling; CSAIL research affiliate and MIT-</span><span class="fs12lh1-5 ff1">IBM AI Lab research manager Akash Srivastava; graduate students Seungwook Han and Yilun Du ’19; former postdoc Abhishek Gupta, who is now an assistant professor at the University of Washington; and former graduate student Shuang Li PhD ’23.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">The team’s work was supported, in part, by the National Science Foundation, the U.S. Defense Advanced Research Projects Agency, the U.S. Army Research Office, the U.S. Office of Naval Research Multidisciplinary University Research Initiatives, and the MIT-</span><span class="fs12lh1-5 ff1">IBM Watson AI Lab. Their findings were presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS).</span></div></div>]]></description>
			<pubDate>Sun, 14 Jan 2024 06:09:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/BotBuiltTeam-transformed_thumb.jpg" length="232619" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?multiple-ai-models-help-robots-execute-complex-plans-more-transparently</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000015</guid>
		</item>
		<item>
			<title><![CDATA[Congress Wants Tech Companies to Pay Up for AI Training Data]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000014"><div><span class="fs12lh1-5 ff1">At a Senate hearing on AI’s impact on journalism, lawmakers backed media industry calls to make OpenAI and other tech companies pay to license news articles and other data used to train algorithms.</span></div><div><span class="fs12lh1-5 ff1"><br></span></div><div><div><span class="fs12lh1-5 ff1">Do AI companies need to pay for the training data that powers their generative AI systems? The question is hotly contested in Silicon Valley and in a wave of lawsuits levied against tech behemoths like Meta, Google, and OpenAI. In Washington, DC, though, there seems to be a growing consensus that the tech giants need to cough up.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Today, at a Senate hearing on AI’s impact on journalism, lawmakers from both sides of the aisle agreed that OpenAI and others should pay media outlets for using their work in AI projects. “It’s not only morally right,” said Richard Blumenthal, the Democrat who chairs the Judiciary Subcommittee on Privacy, Technology, and the Law that held the hearing. “It’s legally required.”</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Josh Hawley, a Republican working with Blumenthal on AI legislation, agreed. “It shouldn’t be that just because the biggest companies in the world want to gobble up your data, they should be able to do it,” he said.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Media industry leaders at the hearing today described how AI companies were imperiling their industry by using their work without compensation. Curtis LeGeyt, CEO of the National Association of Broadcasters, Danielle Coffey, CEO of the News Media Alliance, and Roger Lynch, CEO of Condé Nast, all spoke in favor of licensing. 
(WIRED is owned by Condé Nast.)</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Coffey claimed that AI companies “eviscerate the quality content they feed upon,” and Lynch characterized training data scraped without permission as “stolen goods.” Coffey and Lynch both said that they believe AI companies are infringing on copyright under current law. Lynch urged lawmakers to clarify that using journalistic content without first brokering licensing agreements is not protected by fair use, a legal doctrine that permits unlicensed use of copyrighted material under certain conditions.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Common Ground</span><br></div><div><span class="fs12lh1-5">Senate hearings can be adversarial, but the mood today was largely congenial. The lawmakers and media industry insiders often applauded each other’s statements. “If Congress could clarify that the use of our content, or other publisher content, for the training and output of AI models is not fair use, then the free market will take care of the rest,” Lynch said at one point. “That seems eminently reasonable to me,” Hawley replied.</span><br></div><div><br></div><div><span class="fs12lh1-5 ff1">Journalism professor Jeff Jarvis was the hearing’s only discordant voice. He asserted that training on data obtained without payment is, indeed, fair use, and spoke against compulsory licensing, arguing that it would damage the information ecosystem rather than safeguard it. “I must say that I am offended to see publishers lobby for protectionist legislation, trading on the political capital earned through journalism,” he said, jabbing at his fellow speakers. 
(Jarvis was also subject to the hearing’s only truly contentious line of questioning, from Republican Marsha Blackburn, who needled him about whether AI is biased against conservatives and recited an AI-</span><span class="fs12lh1-5">generated poem praising President Biden as evidence.)</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Outside of the committee room, there is less agreement that mandatory licensing is necessary. OpenAI and other AI companies have argued that it’s not viable to license all training data, and some independent AI experts agree.</span></div><div><br></div><div><br></div><div><span class="fs12lh1-5 ff1">“What would that even look like?” asks Sarah Kreps, who directs the Tech Policy Institute at Cornell University. “Requiring licensing data will be impractical, favor the big firms like OpenAI and Microsoft that have the resources to pay for these licenses, and create enormous costs for startup AI firms that could diversify the marketplace and guard against hegemonic domination and potential antitrust behavior of the big firms.”</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Even within circles that favor some form of licensing for AI training data, there’s some dissent about whether it should be legally compulsory rather than simply encouraged as an industry norm. “As a high-</span><span class="fs12lh1-5 ff1">quality and up-</span><span class="fs12lh1-5 ff1">to-</span><span class="fs12lh1-5">date source of information, news media is a valuable source of data for AI companies. My opinion is that they should pay to license it and that it is in their interest to do so,” Northwestern computational journalism professor Nick Diakopoulos says. “But I do not think a mandatory licensing regime is tenable.”</span></div><div><br></div><div><span class="fs12lh1-5 ff1">It remains to be seen exactly how lawmakers plan to fulfill requests, like Lynch’s, to clarify existing copyright law. 
But there are already several attempts to pass legislation that would create guardrails around data licensing, including the Journalism and Competition Preservation Act, a bill authorizing news outlets to collectively negotiate licensing arrangements, and Blumenthal and Hawley’s</span><span class="fs12lh1-5 ff1"> </span><span class="imUl fs12lh1-5 cf1 ff1">Bipartisan Framework on AI Legislation,</span><span class="fs12lh1-5 ff1"> </span><span class="fs12lh1-5 ff1">which calls for a licensing regime overseen by an independent body.</span><br><br><span class="fs12lh1-5 ff1">As today’s hearing made clear, though, Congress is already highly critical of AI’s potential to amplify the power of the tech industry and its potentially deleterious impacts on journalism. The way Blumenthal described Big Tech’s impact on the local media ecosystem captured this pugilistic tone: “It is literally eating away at the lifeblood of our democracy.”</span></div></div></div>]]></description>
			<pubDate>Sat, 13 Jan 2024 14:41:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/AI-Journalism-Senate-Hearing-Business-1915739103_thumb.jpg" length="216680" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?congress-wants-tech-companies-to-pay-up-for-ai-training-data</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000014</guid>
		</item>
		<item>
			<title><![CDATA[Google, Meta and TikTok's debts removed from Russian database - bailiffs]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000013"><div><span class="fs12lh1-5 ff1">LONDON, Jan 3 (Reuters) -</span></div><div><span class="fs12lh1-5 ff1"><br></span></div><div><span class="fs12lh1-5 ff1"> </span><span class="fs12lh1-5 ff1">Fines imposed by Russian courts on Alphabet's Google (GOOGL.O) and YouTube, Meta (META.O), TikTok and Telegram appear to have been settled as the companies are no longer registered as debtors in the state bailiffs' database.</span><wbr></div><div><br></div><div><span class="fs12lh1-5 ff1">But the database, accessed by Reuters on Wednesday, still includes X (formerly Twitter) and Twitch, with fines totalling 51 million roubles ($560,730) and 23 million roubles ($252,879), respectively.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Google, Meta, TikTok and Telegram did not immediately respond to requests for comment. State bailiffs could not immediately be reached.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Russia has been at loggerheads with foreign technology companies over what it deems unlawful content and a failure to store user data locally, in simmering disputes that intensified after Russia invaded Ukraine in February 2022.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Following the invasion, Twitter and Meta Platforms' Facebook and Instagram were blocked, and Google-</span><span class="fs12lh1-5 ff1">owned YouTube became a particular target of the Russian state's ire.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">In late 2023, a Russian court imposed a fine against Google of 4.6 billion roubles ($50.4 million), calculated as a proportion of its annual turnover in Russia. 
Meta, which was labelled as "extremist" in 2022, has also been subjected to fines as a proportion of its Russian revenue.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">($1 = 91.2575 roubles)</span><br></div><div><br></div><div><span class="fs12lh1-5 ff1">Reporting by Reuters; Editing by Mark Trevelyan</span></div></div>]]></description>
			<pubDate>Sat, 13 Jan 2024 12:58:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/goo_thumb.jpg" length="78753" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?google,-meta-and-tiktok-s-debts-removed-from-russian-database---bailiffs</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000013</guid>
		</item>
		<item>
			<title><![CDATA[OpenAI launches GPT Store for custom AI assistants]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000001B"><div class="imTAJustify">OpenAI has launched its new <span class="cf1">GPT Store</span>, providing users with access to custom AI assistants.</div><div class="imTAJustify">Since the announcement of custom ‘GPTs’ two months ago, OpenAI says users have already created over three million custom assistants. Builders can now share their creations in the dedicated store.</div><div class="imTAJustify">The store features assistants focused on a wide range of topics, including art, research, programming, education, lifestyle, and more. OpenAI is highlighting assistants it deems most useful, including:</div><div class="imTAJustify"><ul><li><span class="fsNaNlh1-5 cf2 ff1">Personal trail recommendations from AllTrails</span></li><li><span class="fsNaNlh1-5 cf2 ff1">Searching academic papers with Consensus</span></li><li><span class="fsNaNlh1-5 cf2 ff1">Expanding coding skills via Khan Academy’s Code Tutor</span></li><li><span class="fsNaNlh1-5 cf2 ff1">Designing presentations with Canva</span></li><li><span class="fsNaNlh1-5 cf2 ff1">Book recommendations from Books</span></li><li><span class="fsNaNlh1-5 cf2 ff1">Maths help from CK-12 Flexi</span></li></ul></div><div class="imTAJustify">OpenAI says making an assistant is simple and requires no coding knowledge. To share one, builders currently need to make it accessible to ‘Anyone with the link’ and verify their profile.</div><div class="imTAJustify">OpenAI introduced new usage policies and brand guidelines to ensure compliance. A review system combines human and automated checking before assistants are listed. Users can also flag concerning content.</div><div class="imTAJustify">From Q1 2024, OpenAI will pay qualifying US-based builders for user engagement with their assistants. More details on exact payment criteria will be shared closer to launch.</div><div class="imTAJustify">For enterprise users, OpenAI announced ChatGPT Team plans for teams of all sizes. 
These provide access to a private store section containing company-specific assistants published securely to their workspace.</div><div class="imTAJustify">ChatGPT Enterprise customers will soon get admin controls for internal sharing and selecting which external assistants can be used by employees. As with all ChatGPT Team and Enterprise content, conversations are not used to improve OpenAI’s models.</div><div class="imTAJustify">Few apps have ever achieved the adoption rate of ChatGPT. OpenAI will be hoping its new store and revenue opportunities will build upon this momentum by incentivising builders to create assistants that provide value to consumers and enterprises alike.</div><div class="imTAJustify"><em>(Image Credit: </em><em><span class="cf1">OpenAI</span></em><em>)</em></div><div class="imTAJustify"><strong><b>See also:</b><b> </b></strong><strong><b><span class="cf1">OpenAI: Copyrighted data ‘impossible’ to avoid for AI training</span></b></strong></div></div>]]></description>
			<pubDate>Thu, 11 Jan 2024 13:33:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/gpt_thumb.jpg" length="289538" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?openai-launches-gpt-store-for-custom-ai-assistants</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000001B</guid>
		</item>
		<item>
			<title><![CDATA[Apple agrees to settle lawsuit over iTunes gift card scam]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000012"><div><span class="fs12lh1-5 ff1">Apple (AAPL.O) has agreed to settle a lawsuit accusing the company of knowingly letting scammers exploit its gift cards and of keeping the stolen funds for itself.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">According to a filing on Wednesday in federal court in San Jose, California, Apple and the plaintiffs have agreed on material settlement terms after working with a mediator.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">They are drafting a formal settlement to be presented to U.S. District Judge Edward Davila for preliminary approval.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Apple and lawyers for the plaintiffs did not immediately respond to requests for comment.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">The scam involves fraudsters who instill panic or urgency by insisting by phone that victims buy App Store and iTunes gift cards or Apple Store gift cards in order to pay for taxes, hospital and utility bills, bail and debt collection.</span><br></div><div><br></div><div><span class="fs12lh1-5 ff1">Victims are then told to share the codes on the backs of the cards, despite a warning on the cards that reads: "Do not share your code with anyone you do not know."</span></div><div><br></div><div><span class="fs12lh1-5 ff1">According to the complaint, Apple would typically deposit only 70% of the stolen funds into fraudsters' bank accounts, and keep 30% for itself as a "commission" for knowingly converting stolen codes into dollars.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Victims likely lost "hundreds of millions of dollars" in the scam, the complaint said.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">The lawsuit covered anyone in the United States who, from 2015 through July 31, 2020, bought gift cards redeemable on iTunes or the App Store, provided codes to fraudsters, and did not 
receive refunds from Apple.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">In June 2022, Davila rejected Apple's bid to dismiss the lawsuit.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">He said the plaintiffs sufficiently alleged that the Cupertino, California-</span><span class="fs12lh1-5 ff1">based company's effort to disclaim liability, even after victims claimed they were scammed, was unconscionable.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">The case is Barrett et al v Apple Inc et al, U.S. District Court, Northern District of California, No. 20-04812.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">(This story has been refiled to correct the day to Wednesday, instead of Tuesday, in paragraph 2)</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Reporting by Jonathan Stempel in New York; Editing by Daniel Wallis</span></div></div>]]></description>
			<pubDate>Thu, 04 Jan 2024 11:53:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/FRCMYCSCVRI3JAD2SZK4F2GCOE_thumb.jpg" length="136642" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?apple-agrees-to-settle-lawsuit-over-itunes-gift-card-scam</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000012</guid>
		</item>
		<item>
			<title><![CDATA[New York Times sues Microsoft and OpenAI in copyright case]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000011"><div><span class="fs12lh1-5 ff1">The New York Times has become the first major US media company to sue OpenAI and Microsoft over their artificial intelligence chatbots, alleging the tech companies have taken a “free-ride” on millions of articles to build the groundbreaking technology.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">The newspaper is seeking unspecified billions of dollars in damages from the two companies for “profit[ing] from the massive copyright infringement, commercial exploitation and misappropriation of The Times’s intellectual property”.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">The move comes as media companies have grown increasingly concerned that generative AI models — which can spew out humanlike text, images and code in seconds — may have been fed their content during their creation without permission or compensation.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">AI groups have said that ingesting and processing vast amounts of information that is available on the open internet constitutes “fair use” under US copyright laws. 
Publishers fear they will lose traffic, and therefore revenues, as a result of chatbots, such as OpenAI’s hugely popular ChatGPT, summarising their output.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">“Defendants’ unlawful use of The Times’s work to create artificial intelligence products that compete with it threatens The Times’s ability to provide that service” of news, analysis and commentary, its lawsuit, which was filed in New York on Wednesday, alleged.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">The newspaper claims the two tech companies have sought “to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment”.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">OpenAI said: “We respect the rights of content creators and owners and are committed to working with them to ensure they benefit from AI technology and new revenue models. Our ongoing conversations with the New York Times have been productive and moving forward constructively, so we are surprised and disappointed with this development. We’re hopeful that we will find a mutually beneficial way to work together, as we are doing with many other publishers.”</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Microsoft did not respond to a request for comment.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Microsoft is OpenAI’s biggest backer after committing up to $13bn to fuel the company’s growth and provide the huge technical infrastructure needed to create its AI models. 
OpenAI’s GPT technology also underpins Microsoft’s Bing Chat, a feature within the software giant’s search engine.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">News publishers around the world have been meeting AI companies including OpenAI, Microsoft and Google for several months in an effort to hammer out deals to license their content.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">This month, Germany’s Axel Springer struck a deal with OpenAI worth tens of millions of euros a year to let its AI systems use content from outlets such as Bild, Politico and Business Insider.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">The Times’s lawsuit alleges the company has held similar discussions with Microsoft and OpenAI “for months”. “These negotiations have not led to a resolution,” it stated.</span><br></div><div><br></div><div><span class="fs12lh1-5 ff1">The Times’s challenge is the latest in a series of lawsuits filed against OpenAI, alleging copyright infringement. In September, a group of bestselling authors including John Grisham, David Baldacci, Jonathan Franzen and George RR Martin sued the tech group, accusing its algorithms of being engaged in “systematic theft on a mass scale”.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Journalist and writer Julian Sancton filed a similar complaint the following month, and was soon joined by New Yorker writer Jia Tolentino, among others.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">While OpenAI’s lawyers have yet to respond to those two suits, they have responded to a proposed class action filed in California, arguing that some of the claims should be dismissed as its model can rely on the “fair use” doctrine. 
They claimed this doctrine had been interpreted by “numerous courts” to mean that the use of “copyrighted materials by innovators in transformative ways does not violate copyright”.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">OpenAI’s lawyers have also pointed to an order in a separate challenge brought against Meta’s AI model in California by comedian Sarah Silverman and writer Ta-Nehisi Coates, among others, in which the court found that the output of the company’s large language model was not “substantially similar” to the books written by the plaintiffs.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Shares in The New York Times Company rose about 1 per cent on Wednesday morning.</span></div></div>]]></description>
			<pubDate>Mon, 01 Jan 2024 11:45:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/qw_thumb.jpg" length="629624" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?new-york-times-sues-microsoft-and-openai-in-copyright-case</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000011</guid>
		</item>
		<item>
			<title><![CDATA[Beware AI’s hidden costs before they bankrupt innovation]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000010"><div><span class="fs12lh1-5 ff1">This year, the median household income for home buyers jumped to $107,000 from $88,000 last year, according to the National Association of Realtors. The volume of homes for sale in the U.S. reached a record low, meanwhile — and shows no sign of recovery.</span><br><br><span class="fs12lh1-5 ff1">Now, one might argue the increasing price and interrelated decreasing supply of homes are positive trends, in fact, because they could push families toward more environmentally friendly, sustainable alternatives. Studies show that single-</span><wbr><span class="fs12lh1-5 ff1">family suburbs contribute significant greenhouse gas emissions while discouraging affordable new housing.</span><br><br><span class="fs12lh1-5 ff1">But startups such as BotBuilt make the case that prospective homebuyers can have their cake and eat it, too, by embracing tech to lower the cost — and mitigate the negative impacts — of homebuilding.</span><br><br><span class="fs12lh1-5 ff1">BotBuilt is the brainchild of Brent Wadas, Colin Devine and robotics engineer Barrett Ames. Founded in 2020, the company aims to create a robotic system that can take in a building plan, translate that plan into a series of machine commands and send those commands to its robots.</span><br><br><span class="fs12lh1-5 ff1">What inspired the co-</span><wbr><span class="fs12lh1-5 ff1">founders to tackle homebuilding? Personal experience, according to Ames. While a graduate student at Duke, Ames and his wife bought a fixer-</span><wbr><span class="fs12lh1-5 ff1">upper near the college campus and recruited friends and family to help renovate the house. 
Throughout the remodel, Ames says he learned a lot about the challenges — and patterns — of construction.</span><br><br><span class="fs12lh1-5 ff1">“The housing industry is facing a huge housing shortage, and builders know they have to continue to build as many homes as possible to make up for years of underbuilding,” Ames told TechCrunch in an email interview. “Because of the increase in interest rates, many people do not want to leave their current homes and associated rates, further increasing the demand for new housing.”</span><br><br><span class="fs12lh1-5 ff1">Now, BotBuilt’s envisioned system doesn’t erect homes from scratch. It focuses instead on a specific part of the homebuilding “flow”: constructing framing.</span><br><br><span class="fs12lh1-5 ff1">BotBuilt’s robotics piece together panels for walls, floor trusses and roof trusses, several of the major framing components of homes. The company’s system, which ostensibly costs around $1 per hour to run, can be reprogrammed to build “entirely” different frame designs for homes relatively quickly, Ames says.</span><br><img class="image-0" src="http://asianheritagesociety.org/images/BotBuiltTeam-transformed.jpg"  title="" alt="" width="810" height="541" /><br><span class="fs12lh1-5 ff1">“The flexibility of our robotic systems is our … big advantage,” Ames said. “Prior attempts to use robots to innovate within construction have largely relied on hard automation, which means that robots are programmed to do the same task over and over again. 
This approach works well for repetitive tasks like building cars, but it’s a poor fit for the construction industry, where there’s a huge variety of designs.”</span><br><br><span class="fs12lh1-5 ff1">By automating the framing step, it’s Ames’ theory that the pace of homebuilding can be dramatically accelerated while reducing costs.</span><br><br><span class="fs12lh1-5 ff1">Typically, house framing costs $7 to $16 per square foot, which includes $4 to $10 in framing labor costs. Framing takes about a month, best-</span><wbr><span class="fs12lh1-5 ff1">case scenario, but factors like bad weather can delay things — as can labor shortages. According to the National Association of Home Builders, more than 55% of single-</span><wbr><span class="fs12lh1-5 ff1">family homebuilders reported a shortage of skilled labor across homebuilding trades, including framers, in 2021.</span><br><br><span class="fs12lh1-5 ff1">BotBuilt primarily provides services to homebuilders. It doesn’t sell the frame-</span><wbr><span class="fs12lh1-5 ff1">building system itself, but rather operates robot-</span><wbr><span class="fs12lh1-5 ff1">equipped factories to produce framing for homebuilding customers.</span><br><br><span class="fs12lh1-5 ff1">“The timing of framing impacts every other trade involved in the construction process and can make or break a developer’s budget,” Ames said. “The vast majority of … framing components are built by people using manual methods … BotBuilt empowers builders by helping them increase both their volume and margin by leveraging plentiful, high-</span><wbr><span class="fs12lh1-5 ff1">quality and affordable robotic labor.”</span><br><br><span class="fs12lh1-5 ff1">Ames acknowledges that BotBuilt has rivals in the robotics homebuilding space, like Randek, Weinmann and House of Design. 
Others include Diamond Age and Mighty Homes, both of which have created systems that can print and assemble components like home interiors and roof structures.</span><br><br><span class="fs12lh1-5 ff1">BotBuilt is off to a gentle start, with only nine homes built so far and revenue hovering around $75,000. But Ames claims the pace will ramp up in 2024; the plan is to begin shipping trusses built by its robotics while scaling BotBuilt’s general operations, he says.</span><br><br><span class="fs12lh1-5 ff1">“Manual wall panel and truss plants operate at 30-</span><wbr><span class="fs12lh1-5 ff1">40% gross margins, so our level of automation will allow us to be significantly higher than that and still deliver significant cost savings to builders,” Ames says. (He estimates that BotBuilt makes ~$15,000 in revenue per house of wall panels built.) “We already have ten builders with over 2,000 homes and apartment units in our pipeline to build, and we will build them as quickly as we can with our initial two factories.”</span><br><br><span class="fs12lh1-5 ff1">To help scale the company, BotBuilt has raised $12.4 million in a seed funding round; previous investors include Ambassador Supply, Y Combinator, Owens Corning and Shadow Ventures. Part of the tranche, which values BotBuilt at $35 million post-</span><wbr><span class="fs12lh1-5 ff1">money, will be put toward growing the Durham, North Carolina-</span><wbr><span class="fs12lh1-5 ff1">based company’s team from 13 people to about 20, Ames says.</span></div></div>]]></description>
			<pubDate>Thu, 28 Dec 2023 11:31:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/GettyImages-1369169431_thumb.jpg" length="698572" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?beware-ai-s-hidden-costs-before-they-bankrupt-innovation</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000010</guid>
		</item>
		<item>
			<title><![CDATA[The New York Times wants OpenAI and Microsoft to pay for training data]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000000F"><div><span class="fs12lh1-5 ff1">The New York Times is suing OpenAI and its close collaborator (and investor), Microsoft, for allegedly violating copyright law by training generative AI models on Times’ content.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">In the lawsuit, filed in the Federal District Court in Manhattan, The Times contends that millions of its articles were used to train AI models, including those underpinning OpenAI’s ultra-popular ChatGPT and Microsoft’s Copilot, without its consent. The Times is calling for OpenAI and Microsoft to “destroy” models and training data containing the offending material and to be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.”</span></div><div><br></div><div><span class="fs12lh1-5 ff1">“If The Times and other news organizations cannot produce and protect their independent journalism, there will be a vacuum that no computer or artificial intelligence can fill,” reads The Times’ complaint. “Less journalism will be produced, and the cost to society will be enormous.”</span></div><div><br></div><div><span class="fs12lh1-5 ff1">In an emailed statement, an OpenAI spokesperson said: “We respect the rights of content creators and owners and are committed to working with them to ensure they benefit from AI technology and new revenue models. Our ongoing conversations with The New York Times have been productive and moving forward constructively, so we are surprised and disappointed with this development. 
We’re hopeful that we will find a mutually beneficial way to work together, as we are doing with many other publishers.”</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Generative AI models “learn” from examples to craft essays, code, emails, articles and more, and vendors like OpenAI scrape the web for millions to billions of these examples to add to their training sets. Some examples are in the public domain. Others aren’t, or come under restrictive licenses that require citation or specific forms of compensation.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Vendors argue fair use doctrine provides a blanket protection for their web-scraping practices. Copyright holders disagree; hundreds of news organizations are now using code to prevent OpenAI, Google and others from scanning their websites for training data.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">The vendor-outlet conflict has led to a growing number of legal battles, The Times’ being the latest.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Actress Sarah Silverman joined a pair of lawsuits in July that accuse Meta and OpenAI of having “ingested” Silverman’s memoir to train their AI models. In a separate suit, thousands of novelists, including Jonathan Franzen and John Grisham, claim OpenAI sourced their work as training data without their permission or knowledge. 
And several programmers have an ongoing case against Microsoft, OpenAI and GitHub over Copilot, an AI-powered code-generating tool, which the plaintiffs say was developed using their IP-protected code.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">While The Times isn’t the first to sue generative AI vendors over alleged IP violations involving written works, it’s the largest publisher involved in such a suit to date — and one of the first to highlight potential damage to its brand through “hallucinations,” or made-up facts from generative AI models.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">The Times’ complaint cites several cases in which Microsoft’s Bing Chat (now called Copilot), which is underpinned by an OpenAI model, provided incorrect information that was said to have come from The Times — including results for “the 15 most heart-healthy foods,” 12 of which weren’t mentioned in any Times article.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">The Times makes the case, also, that OpenAI and Microsoft are effectively building news publisher competitors using The Times’ works, harming The Times’ business by providing information that couldn’t normally be accessed without a subscription — information that isn’t always cited, sometimes monetized and stripped of affiliate links that The Times uses to generate commissions, moreover.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">As The Times’ complaint alludes to, generative AI models have a tendency to regurgitate training data, for example reproducing almost verbatim results from articles. 
Beyond regurgitation, OpenAI has on at least one occasion inadvertently enabled ChatGPT users to get around paywalled news content.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">“Defendants seek to free-ride on The Times’s massive investment in its journalism,” the complaint says, accusing OpenAI and Microsoft of “using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it.”</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Impacts to the news subscription business — and publisher web traffic — are at the heart of a tangentially similar suit filed by publishers earlier in the month against Google. In the case, the plaintiffs, like The Times, argued Google’s GenAI experiments, including its AI-powered Bard chatbot and Search Generative Experience, siphon off publishers’ content, readers and ad revenue through anticompetitive means.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">There’s credence to publishers’ assertions. A recent model from The Atlantic found that, if a search engine like Google were to integrate AI into search, it’d answer a user’s query 75% of the time without requiring a click-through to its website. Publishers in the Google suit estimate they’d lose as much as 40% of their traffic.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">That doesn’t mean they’ll be successful in court. 
Heather Meeker, a founding partner at OSS Capital and an adviser on IP matters including licensing arrangements, compared The Times’ example of regurgitation to “using a word processor to cut and paste.”</span></div><div><br></div><div><span class="fs12lh1-5 ff1">“In the complaint, The New York Times gives an example of a ChatGPT session about a 2012 restaurant review,” Meeker told TechCrunch via email. “The prompt for ChatGPT is ‘What were the opening paragraphs of his review?’ The next prompts then repeatedly ask for ‘the next sentence.’ Teasing a chatbot into reproducing input is not a sensible basis for copyright infringement … If the user intentionally makes the chatbot copy, that’s the user’s fault. And that’s why most [lawsuits like this] will probably fail.”</span></div><div><br></div><div><span class="fs12lh1-5 ff1">Some news outlets, rather than fight generative AI vendors in court, have chosen to ink licensing agreements with them. The Associated Press struck a deal in July with OpenAI, and Axel Springer, the German publisher that owns Politico and Business Insider, did likewise this month.</span></div><div><br></div><div><span class="fs12lh1-5 ff1">In its complaint, The Times says that it attempted to reach a licensing arrangement with Microsoft and OpenAI in April but that talks weren’t ultimately fruitful.</span></div></div>]]></description>
			<pubDate>Thu, 28 Dec 2023 10:41:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/GettyImages-458591263_thumb.jpg" length="207534" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?the-new-york-times-wants-openai-and-microsoft-to-pay-for-training-data</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000000F</guid>
		</item>
		<item>
			<title><![CDATA[World's first AI-powered restaurant set to open in Southern California]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000000E"><div><span class="fs12lh1-5 ff1">PASADENA, Calif. -</span><wbr><span class="fs12lh1-5 ff1"> </span><span class="fs12lh1-5 ff1">The world's first fully autonomous restaurant is set to open in Southern California.</span><br><br><span class="fs12lh1-5 ff1">At ‘CaliExpress by Flippy,’ robots are the chefs in the kitchen… both on the grill and at the fry station. They'll be cooking hamburgers, cheeseburgers and french fries.</span><br><br><span class="fs12lh1-5 ff1">Miso Robotics created Flippy, which they say is the world's first AI-</span><wbr><span class="fs12lh1-5 ff1">powered robotic fry station. They say Flippy works alongside humans to "enhance quality and consistency, while creating substantial, measurable cost savings for restaurants."</span><br><br><span class="fs12lh1-5 ff1">The company claims that by using Flippy, safety in the kitchen will increase, as slips and burns can be eliminated. The company also says Flippy can reduce food waste.</span><br><br><span class="fs12lh1-5 ff1">"The CaliExpress by Flippy kitchen can be run by a much smaller crew, in a less stressful environment, than competing restaurants — while also providing above average wages," a press release read.</span><br><br><span class="fs12lh1-5 ff1">In addition to Flippy, the restaurant will also use PopID technology, which the company says will help simplify ordering and paying and allow guests to get personalized order recommendations.</span><br><br><span class="fs12lh1-5 ff1">According to the company, ‘CaliExpress by Flippy’ will also give customers a museum-</span><wbr><span class="fs12lh1-5 ff1">like experience with dancing robot arms from retired Flippy units, experimental 3D-</span><wbr><span class="fs12lh1-5 ff1">printed artifacts from past development, and photographic displays.</span><br><br><span class="fs12lh1-5 ff1">‘CaliExpress by Flippy’ is located in downtown Pasadena at 561 E. Green St. 
It opens in December 2023 by reservation only, with a grand opening to follow later.</span></div></div>]]></description>
			<pubDate>Thu, 21 Dec 2023 10:38:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/flippy_thumb.jpg" length="134328" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?world-s-first-ai-powered-restaurant-set-to-open-in-southern-california</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000000E</guid>
		</item>
		<item>
			<title><![CDATA[Seeking a Big Edge in A.I., South Korean Firms Think Smaller.]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000000D"><div><b><span class="fs12lh1-5 ff1">While they lag behind their U.S. counterparts, their focus on non-</span><wbr><span class="fs12lh1-5 ff1">English languages could help loosen the American grip on artificial intelligence.</span></b><br><br><span class="fs12lh1-5 ff1">ChatGPT, Bard, Claude. The world’s most popular and successful chatbots are trained on data scraped from vast swaths of the internet, mirroring the cultural and linguistic dominance of the English language and Western perspectives. This has raised alarms about the lack of diversity in artificial intelligence. There is also the worry that the technology will remain the province of a handful of American companies.</span><br></div><div><br></div><div><span class="fs12lh1-5 ff1">In South Korea, a technological powerhouse, firms are taking advantage of the technology’s malleability to shape A.I. systems from the ground up to address local needs. Some have trained A.I. models with sets of data rich in Korean language and culture. South Korean companies say they’re building A.I. for Thai, Vietnamese and Malaysian audiences. Others are eyeing customers in Brazil, Saudi Arabia and the Philippines, and in industries like medicine and pharmacy.</span><br><br><span class="fs12lh1-5 ff1">This has fueled hopes that A.I. can become more diverse, work in more languages, be customized to more cultures and be developed by more countries.</span><br><br><span class="fs12lh1-5 ff1">“The more competition is out there, the more systems are going to be robust: socially acceptable, safer, more ethical,” said Byong-</span><wbr><span class="fs12lh1-5 ff1">Tak Zhang, a computer science professor at Seoul National University.</span><br><br><span class="fs12lh1-5 ff1">While there are some prominent non-</span><wbr><span class="fs12lh1-5 ff1">American A.I. 
companies, like France’s Mistral, the recent upheaval at OpenAI, the maker of ChatGPT, has highlighted how concentrated the industry remains.</span><br><br><span class="fs12lh1-5 ff1">The emerging A.I. landscape in South Korea is one of the most competitive and diverse in the world, said Yong Lim, a professor of law at Seoul National University who leads its AI Policy Initiative. The country’s export-</span><wbr><span class="fs12lh1-5 ff1">driven economy has encouraged new ventures to seek ways to tailor A.I. systems to specific companies or countries.</span><br><br><span class="fs12lh1-5 ff1">South Korea is well positioned to build A.I. technology, developers say, given it has one of the world’s most wired populations to generate vast amounts of data to train A.I. systems. Its tech giants have the resources to invest heavily in research. The government has also been encouraging: It</span><span class="fs12lh1-5 ff1"> </span><span class="fs12lh1-5 ff1">has provided companies with money and data that could be used to train large language models, the technology that powers A.I. chatbots.</span><br><br><span class="fs12lh1-5 ff1">Few other countries have the combination of capital and technology required to develop a large language model that can power a chatbot, experts say. They estimate that it costs $100 million to $200 million to build a foundational model, the technology that serves as the basis for A.I. chatbots.</span><br><br><img class="image-0" src="http://asianheritagesociety.org/images/skorea-ai-wlgp-superJumbo.jpg"  title="" alt="" width="401" height="600" /></div><div><span class="fs12lh1-5 ff1">South Korea is still months behind the United States in the A.I. race and may never fully catch up, as the leading chatbots continue to improve with more resources and data.</span><br><br><span class="fs12lh1-5 ff1">But South Korean companies believe they can compete. 
Instead of going after the global market like their American competitors, companies like Naver and LG have tried to target their A.I. models to specific industries, cultures or languages, rather than pulling from the entire internet.</span><br><br><span class="fs12lh1-5 ff1">“The localized strategy is a reasonable strategy for them,” said Sukwoong Choi, a professor of information systems at the University at Albany. “U.S. firms are focused on general-</span><wbr><span class="fs12lh1-5 ff1">purpose tools. South Korean A.I. firms can target a specific area.”</span><br><br><span class="fs12lh1-5 ff1">Outside the United States, A.I. prowess appears limited in reach. In China, Baidu’s answer to ChatGPT, called Ernie, and Huawei’s large language model have shown some success at home, but they are far from dominating the global market. Governments and companies in other nations like Canada, Britain, India and Israel have also said they are developing their own A.I. systems, though none has yet released a system that can be used by the public.</span><br><br><span class="fs12lh1-5 ff1">About a year before ChatGPT was released, Naver, which operates South Korea’s most widely used search engine, announced that it had successfully created a large language model. But the chatbot based on that model, Clova X, was released only this September, nearly a year after ChatGPT’s debut.</span><br><br><span class="fs12lh1-5 ff1">Clova X recognizes Korean idioms and the latest slang — language that American-</span><wbr><span class="fs12lh1-5 ff1">made chatbots like Bard, ChatGPT and Claude often struggle to understand. Naver’s chatbot is also integrated into the search engine, letting people use the tool to shop and travel.</span><br><br><span class="fs12lh1-5 ff1">Outside its home market, the company is exploring business opportunities with the Saudi Arabian government. 
Japan could be another potential customer, experts said, since Line, a messaging service owned by Naver, is used widely there.</span><br><br><span class="fs12lh1-5 ff1">LG has also created its own generative A.I. model, the type of artificial intelligence capable of creating original content based on inputs, called Exaone. Since its creation in 2021, LG has worked with publishers, research centers, pharmaceutical firms and medical companies to tailor its system to their data sets and provide them access to its A.I. system.</span><br><br><span class="fs12lh1-5 ff1">The company is targeting businesses and researchers instead of the general user, said Kyunghoon Bae, the director of LG A.I. Research. Its subsidiaries have also begun using its own A.I. chatbots. One of the chatbots, built to analyze chemistry research and chemical equations, has been used by researchers building new materials for batteries, chemicals and medicine.</span><br><br><span class="fs12lh1-5 ff1">“Rather than letting the best one or two A.I. systems dominate, it’s important to have an array of models specific to a domain, language or culture,” said Honglak Lee, the chief scientist of LG’s A.I. research arm.</span><br><br><span class="fs12lh1-5 ff1">Another South Korean behemoth, Samsung, last month announced Samsung Gauss, a generative A.I. model being used internally to compose emails, summarize documents and translate text. The company plans to integrate it into its mobile phones and smart home appliances.</span><br><br><span class="fs12lh1-5 ff1">Other major companies have also said they are developing their own large language models, making South Korea one of the few countries with so many companies building A.I. systems. KT, a South Korean telecommunications firm, has said it is working with a Thai counterpart, Jasmine Group, on a large language model specialized in the Thai language. Kakao, which makes an eponymous super app for chats, has said it is developing generative A.I. 
for Korean, English, Japanese, Vietnamese and Malaysian.</span><br><br><span class="fs12lh1-5 ff1">Still, the United States’ dominance in A.I. appears secure for now. It remains to be seen whether other countries can catch up.</span><br><br><span class="fs12lh1-5 ff1">“The market is convulsing; it’s very difficult to predict what’s going to happen,” said Mr. Lim, the A.I. policy expert. “It’s the Wild West, in a sense.”</span></div></div>]]></description>
			<pubDate>Wed, 20 Dec 2023 09:39:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/skorea-ai-gjct-superJumbo_thumb.jpg" length="70923" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?seeking-a-big-edge-in-a-i-,-south-korean-firms-think-smaller-</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000000D</guid>
		</item>
		<item>
			<title><![CDATA[AI & Big Data Expo: Ethical AI integration and future trends]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000019"><div class="imTAJustify">Grace Zheng, Data Analyst at <span class="cf1">Canon</span> and Founder of <span class="cf1">Kosh Duo</span>, recently sat down for an interview with AI News during <span class="cf1">AI & Big Data Expo Global</span> to discuss integrating AI ethically and to share her insights on future trends. </div><div class="imTAJustify">Zheng first explained how over a decade working in digital marketing and e-commerce more recently sparked her interest in data analytics and artificial intelligence as machine learning has become hugely popular.</div><div class="imTAJustify">At Canon, Zheng’s team focuses on ethically integrating AI into business by first mapping current and potential AI applications across areas like marketing and e-commerce. They then analyse and assess risks to ensure compliance with regulations.</div><div class="imTAJustify">The aim, Zheng explained, is “to align with regulations such as the EU legislations.”</div><div class="imTAJustify">As founder of Kosh Duo, Zheng also provides coaching to help businesses scale up through the use of AI marketing and data-driven approaches. She coaches professionals on achieving greater recognition and rewards by leveraging AI tools as well.</div><div class="imTAJustify">A key challenge she encounters is the misunderstanding of what AI truly means – many conflate it solely with chatbots like ChatGPT rather than appreciating the full breadth of machine learning, neural networks, natural language processing, and more that enable today’s AI.</div><div class="imTAJustify">“There’s a lot of misconceptions, definitely. One of the biggest fears, as I touched on, is the very generic understanding that GPT equals AI,” says Zheng. 
“[Kosh Duo] provides coaching services to businesses to scale to the next level using AI marketing and data-driven approaches.”</div><div class="imTAJustify">When asked about trends to watch, Zheng emphasised the need for continual learning given how rapidly the field evolves. She expects that 2024 will be an “awakening year” where businesses truly grasp AI’s potential and individuals appreciate the need to evaluate their current skillsets.</div><div class="imTAJustify">The interview highlighted the transformative but often misunderstood power of AI in business and the importance of developing specialised skills to properly harness it. Zheng stressed that with the right ethical foundations and coaching, AI and machine learning can become positive forces to drive growth rather than something to fear.</div><div class="imTAJustify">Watch our full interview with Grace Zheng below:</div><div class="imTAJustify"><div class="imTACenter"><iframe width="560" height="315" src="https://www.youtube.com/embed/0jnRq4UPlCg?si=Qxw2dKQ1piD68TZ6" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></div></div></div>]]></description>
			<pubDate>Mon, 18 Dec 2023 13:16:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/expo_thumb.jpg" length="354011" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?ai---big-data-expo--ethical-ai-integration-and-future-trends</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000019</guid>
		</item>
		<item>
			<title><![CDATA[Using AI, MIT researchers identify a new class of antibiotic candidates]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000000C"><div><span class="fs12lh1-5 ff1">These compounds can kill methicillin-</span><wbr><span class="fs12lh1-5 ff1">resistant Staphylococcus aureus (MRSA), a bacterium that causes deadly infections.</span><br></div><div><br></div><div><span class="fs12lh1-5 ff1">Using a type of artificial intelligence known as deep learning, MIT researchers have discovered a class of compounds that can kill a drug-</span><wbr><span class="fs12lh1-5 ff1">resistant bacterium that causes more than 10,000 deaths in the United States every year.</span><br></div><div><br></div><div><span class="fs12lh1-5 ff1">In a study appearing today in Nature, the researchers showed that these compounds could kill methicillin-</span><wbr><span class="fs12lh1-5 ff1">resistant Staphylococcus aureus (MRSA) grown in a lab dish and in two mouse models of MRSA infection. The compounds also show very low toxicity against human cells, making them particularly good drug candidates.</span><br><br><span class="fs12lh1-5 ff1">A key innovation of the new study is that the researchers were also able to figure out what kinds of information the deep-</span><wbr><span class="fs12lh1-5 ff1">learning model was using to make its antibiotic potency predictions. This knowledge could help researchers to design additional drugs that might work even better than the ones identified by the model.</span><br><br><span class="fs12lh1-5 ff1">“The insight here was that we could see what was being learned by the models to make their predictions that certain molecules would make for good antibiotics. 
Our work provides a framework that is time-</span><wbr><span class="fs12lh1-5 ff1">efficient, resource-</span><wbr><span class="fs12lh1-5 ff1">efficient, and mechanistically insightful, from a chemical-</span><wbr><span class="fs12lh1-5 ff1">structure standpoint, in ways that we haven’t had to date,” says James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering.</span><br><br><span class="fs12lh1-5 ff1">Felix Wong, a postdoc at IMES and the Broad Institute of MIT and Harvard, and Erica Zheng, a former Harvard Medical School graduate student who was advised by Collins, are the lead authors of the study, which is part of the Antibiotics-</span><wbr><span class="fs12lh1-5 ff1">AI Project at MIT. The mission of this project, led by Collins, is to discover new classes of antibiotics against seven types of deadly bacteria, over seven years.</span><br><br><span class="fs12lh1-5 ff1">Explainable predictions</span><br><br><span class="fs12lh1-5 ff1">MRSA, which infects more than 80,000 people in the United States every year, often causes skin infections or pneumonia. Severe cases can lead to sepsis, a potentially fatal bloodstream infection.</span><br><br><span class="fs12lh1-5 ff1">Over the past several years, Collins and his colleagues in MIT’s Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) have begun using deep learning to try to find new antibiotics. Their work has yielded potential drugs against Acinetobacter baumannii, a bacterium that is often found in hospitals, and many other drug-</span><wbr><span class="fs12lh1-5 ff1">resistant bacteria.</span><br><br><span class="fs12lh1-5 ff1">These compounds were identified using deep learning models that can learn to identify chemical structures that are associated with antimicrobial activity. 
These models then sift through millions of other compounds, generating predictions of which ones may have strong antimicrobial activity.</span><br><br><span class="fs12lh1-5 ff1">These types of searches have proven fruitful, but one limitation to this approach is that the models are “black boxes,” meaning that there is no way of knowing what features the model based its predictions on. If scientists knew how the models were making their predictions, it could be easier for them to identify or design additional antibiotics.</span><br><br><span class="fs12lh1-5 ff1">“What we set out to do in this study was to open the black box,” Wong says. “These models consist of very large numbers of calculations that mimic neural connections, and no one really knows what's going on underneath the hood.”</span><br><br><span class="fs12lh1-5 ff1">First, the researchers trained a deep learning model using substantially expanded datasets. They generated this training data by testing about 39,000 compounds for antibiotic activity against MRSA, and then fed this data, plus information on the chemical structures of the compounds, into the model.</span><br><br><span class="fs12lh1-5 ff1">“You can represent basically any molecule as a chemical structure, and also you tell the model if that chemical structure is antibacterial or not,” Wong says. “The model is trained on many examples like this. If you then give it any new molecule, a new arrangement of atoms and bonds, it can tell you a probability that that compound is predicted to be antibacterial.”</span><br><br><span class="fs12lh1-5 ff1">To figure out how the model was making its predictions, the researchers adapted an algorithm known as Monte Carlo tree search, which has been used to help make other deep learning models, such as AlphaGo, more explainable. 
This search algorithm allows the model to generate not only an estimate of each molecule’s antimicrobial activity, but also a prediction for which substructures of the molecule likely account for that activity.</span><br><br><span class="fs12lh1-5 ff1">Potent activity</span><br><br><span class="fs12lh1-5 ff1">To further narrow down the pool of candidate drugs, the researchers trained three additional deep learning models to predict whether the compounds were toxic to three different types of human cells. By combining this information with the predictions of antimicrobial activity, the researchers discovered compounds that could kill microbes while having minimal adverse effects on the human body.</span><br><br><span class="fs12lh1-5 ff1">Using this collection of models, the researchers screened about 12 million compounds, all of which are commercially available. From this collection, the models identified compounds from five different classes, based on chemical substructures within the molecules, that were predicted to be active against MRSA.</span><br><br><span class="fs12lh1-5 ff1">The researchers purchased about 280 compounds and tested them against MRSA grown in a lab dish, allowing them to identify two, from the same class, that appeared to be very promising antibiotic candidates. In tests in two mouse models, one of MRSA skin infection and one of MRSA systemic infection, each of those compounds reduced the MRSA population by a factor of 10.</span><br><br><span class="fs12lh1-5 ff1">Experiments revealed that the compounds appear to kill bacteria by disrupting their ability to maintain an electrochemical gradient across their cell membranes. This gradient is needed for many critical cell functions, including the ability to produce ATP (molecules that cells use to store energy). 
An antibiotic candidate that Collins’ lab discovered in 2020, halicin, appears to work by a similar mechanism but is specific to Gram-</span><wbr><span class="fs12lh1-5 ff1">negative bacteria (bacteria with thin cell walls). MRSA is a Gram-</span><wbr><span class="fs12lh1-5 ff1">positive bacterium, with thicker cell walls.</span><br><br><span class="fs12lh1-5 ff1">“We have pretty strong evidence that this new structural class is active against Gram-</span><wbr><span class="fs12lh1-5 ff1">positive pathogens by selectively dissipating the proton motive force in bacteria,” Wong says. “The molecules are attacking bacterial cell membranes selectively, in a way that does not incur substantial damage in human cell membranes. Our substantially augmented deep learning approach allowed us to predict this new structural class of antibiotics and enabled the finding that it is not toxic against human cells.”</span><br><br><span class="fs12lh1-5 ff1">The researchers have shared their findings with Phare Bio, a nonprofit started by Collins and others as part of the Antibiotics-</span><wbr><span class="fs12lh1-5 ff1">AI Project. The nonprofit now plans to do more detailed analysis of the chemical properties and potential clinical use of these compounds. 
Meanwhile, Collins’ lab is working on designing additional drug candidates based on the findings of the new study, as well as using the models to seek compounds that can kill other types of bacteria.</span><br><br><span class="fs12lh1-5 ff1">“We are already leveraging similar approaches based on chemical substructures to design compounds de novo, and of course, we can readily adopt this approach out of the box to discover new classes of antibiotics against different pathogens,” Wong says.</span><br><br><span class="fs12lh1-5 ff1">In addition to MIT, Harvard, and the Broad Institute, the paper’s contributing institutions are Integrated Biosciences, Inc., the Wyss Institute for Biologically Inspired Engineering, and the Leibniz Institute of Polymer Research in Dresden, Germany. The research was funded by the James S. McDonnell Foundation, the U.S. National Institute of Allergy and Infectious Diseases, the Swiss National Science Foundation, the Banting Fellowships Program, the Volkswagen Foundation, the Defense Threat Reduction Agency, the U.S. National Institutes of Health, and the Broad Institute. The Antibiotics-</span><wbr><span class="fs12lh1-5 ff1">AI Project is funded by the Audacious Project, Flu Lab, the Sea Grape Foundation, the Wyss Foundation, and an anonymous donor.</span></div></div>]]></description>
			<pubDate>Wed, 13 Dec 2023 09:29:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/MIT-Antibiotic-Predictions-01_0_thumb.jpg" length="100853" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?using-ai,-mit-researchers-identify-a-new-class-of-antibiotic-candidates</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000000C</guid>
		</item>
		<item>
			<title><![CDATA[Deep neural networks show promise as models of human hearing]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000000B"><div>Study shows computational models trained to perform auditory tasks display an internal organization similar to that of the human auditory cortex.</div><div><br></div><div><div>Computational models that mimic the structure and function of the human auditory system could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces. A new study from MIT has found that modern computational models derived from machine learning are moving closer to this goal.</div><div><br></div><div>In the largest study yet of deep neural networks that have been trained to perform auditory tasks, the MIT team showed that most of these models generate internal representations that share properties of representations seen in the human brain when people are listening to the same sounds.</div><div><br></div><div>The study also offers insight into how to best train this type of model: The researchers found that models trained on auditory input including background noise more closely mimic the activation patterns of the human auditory cortex.</div><div><br></div><div>“What sets this study apart is it is the most comprehensive comparison of these kinds of models to the auditory system so far. 
The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.</div><div><br></div><div>MIT graduate student Greta Tuckute and Jenelle Feather PhD ’22 are the lead authors of the open-access paper, which appears today in PLOS Biology.</div><div><br></div><div>Models of hearing</div><div><br></div><div>Deep neural networks are computational models that consist of many layers of information-processing units that can be trained on huge volumes of data to perform specific tasks. This type of model has become widely used in many applications, and neuroscientists have begun to explore the possibility that these systems can also be used to describe how the human brain performs certain tasks.</div><div><br></div><div>“These models that are built with machine learning are able to mediate behaviors on a scale that really wasn't possible with previous types of models, and that has led to interest in whether or not the representations in the models might capture things that are happening in the brain,” Tuckute says.</div><div><br></div><div>When a neural network is performing a task, its processing units generate activation patterns in response to each audio input it receives, such as a word or other type of sound. 
Those model representations of the input can be compared to the activation patterns seen in fMRI brain scans of people listening to the same input.</div><div><br></div><div>In 2018, McDermott and then-graduate student Alexander Kell reported that when they trained a neural network to perform auditory tasks (such as recognizing words from an audio signal), the internal representations generated by the model showed similarity to those seen in fMRI scans of people listening to the same sounds.</div><div><br></div><div>Since then, these types of models have become widely used, so McDermott’s research group set out to evaluate a larger set of models, to see if the ability to approximate the neural representations seen in the human brain is a general trait of these models.</div><div><br></div><div>For this study, the researchers analyzed nine publicly available deep neural network models that had been trained to perform auditory tasks, and they also created 14 models of their own, based on two different architectures. Most of these models were trained to perform a single task — recognizing words, identifying the speaker, recognizing environmental sounds, and identifying musical genre — while two of them were trained to perform multiple tasks.</div><div><br></div><div>When the researchers presented these models with natural sounds that had been used as stimuli in human fMRI experiments, they found that the internal model representations tended to exhibit similarity with those generated by the human brain. 
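The kind of model-to-brain comparison described above is often done with representational similarity analysis (RSA): build a matrix of pairwise dissimilarities between stimulus responses for each system, then correlate the two matrices. The sketch below uses purely synthetic data and is an illustration of the general technique, not the study's actual analysis pipeline:

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns for every pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(model_acts, brain_acts):
    """Spearman correlation between the upper triangles of the two RDMs."""
    m, b = rdm(model_acts), rdm(brain_acts)
    iu = np.triu_indices_from(m, k=1)
    return spearmanr(m[iu], b[iu]).correlation

rng = np.random.default_rng(0)
n_stimuli = 50
latent = rng.standard_normal((n_stimuli, 20))        # structure shared by both systems
model_layer = (latent @ rng.standard_normal((20, 256))
               + 0.1 * rng.standard_normal((n_stimuli, 256)))   # one model stage
fmri_voxels = (latent @ rng.standard_normal((20, 1000))
               + 0.1 * rng.standard_normal((n_stimuli, 1000)))  # one brain region

print(f"shared structure: {rsa_score(model_layer, fmri_voxels):.2f}")  # high
print(f"unrelated model:  {rsa_score(rng.standard_normal((n_stimuli, 256)), fmri_voxels):.2f}")  # near zero
```

A model stage whose responses carry the same latent structure as the voxel responses scores high; a model with random responses scores near zero.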
The models whose representations were most similar to those seen in the brain were models that had been trained on more than one task and had been trained on auditory input that included background noise.</div><div><br></div><div>“If you train models in noise, they give better brain predictions than if you don’t, which is intuitively reasonable because a lot of real-world hearing involves hearing in noise, and that’s plausibly something the auditory system is adapted to,” Feather says.</div><div><br></div><div>Hierarchical processing</div><div><br></div><div>The new study also supports the idea that the human auditory cortex has some degree of hierarchical organization, in which processing is divided into stages that support distinct computational functions. As in the 2018 study, the researchers found that representations generated in earlier stages of the model most closely resemble those seen in the primary auditory cortex, while representations generated in later model stages more closely resemble those generated in brain regions beyond the primary cortex.</div><div><br></div><div>Additionally, the researchers found that models that had been trained on different tasks were better at replicating different aspects of audition. For example, models trained on a speech-related task more closely resembled speech-selective areas.</div><div><br></div><div>“Even though the model has seen the exact same training data and the architecture is the same, when you optimize for one particular task, you can see that it selectively explains specific tuning properties in the brain,” Tuckute says.</div><div><br></div><div>McDermott’s lab now plans to make use of their findings to try to develop models that are even more successful at reproducing human brain responses. 
In addition to helping scientists learn more about how the brain may be organized, such models could also be used to help develop better hearing aids, cochlear implants, and brain-machine interfaces.</div><div><br></div><div>“A goal of our field is to end up with a computer model that can predict brain responses and behavior. We think that if we are successful in reaching that goal, it will open a lot of doors,” McDermott says.</div><div><br></div><div>The research was funded by the National Institutes of Health, an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, an MIT Friends of McGovern Institute Fellowship, a fellowship from the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, and a Department of Energy Computational Science Graduate Fellowship.</div><div><br></div></div></div>]]></description>
			<pubDate>Tue, 12 Dec 2023 09:23:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/MIT_Auditory-Models-01_0_thumb.jpg" length="60631" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?deep-neural-networks-show-promise-as-models-of-human-hearing</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000000B</guid>
		</item>
		<item>
			<title><![CDATA[Automated system teaches users when to collaborate with an AI assistant]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000000A"><div>Artificial intelligence models that pick out patterns in images can often do so better than human eyes — but not always. If a radiologist is using an AI model to help her determine whether a patient’s X-rays show signs of pneumonia, when should she trust the model’s advice and when should she ignore it?</div><div><br></div><div>A customized onboarding process could help this radiologist answer that question, according to researchers at MIT and the MIT-IBM Watson AI Lab. They designed a system that teaches a user when to collaborate with an AI assistant.</div><div><br></div><div>In this case, the training method might find situations where the radiologist trusts the model’s advice — except she shouldn’t because the model is wrong. The system automatically learns rules for how she should collaborate with the AI, and describes them with natural language.</div><div><br></div><div>During onboarding, the radiologist practices collaborating with the AI using training exercises based on these rules, receiving feedback about her performance and the AI’s performance.</div><div><br></div><div>The researchers found that this onboarding procedure led to about a 5 percent improvement in accuracy when humans and AI collaborated on an image prediction task. Their results also show that just telling the user when to trust the AI, without training, led to worse performance.</div><div><br></div><div>Importantly, the researchers’ system is fully automated, so it learns to create the onboarding process based on data from the human and AI performing a specific task. It can also adapt to different tasks, so it can be scaled up and used in many situations where humans and AI models work together, such as in social media content moderation, writing, and programming.</div><div><br></div><div>“So often, people are given these AI tools to use without any training to help them figure out when it is going to be helpful. 
That’s not what we do with nearly every other tool that people use — there is almost always some kind of tutorial that comes with it. But for AI, this seems to be missing. We are trying to tackle this problem from a methodological and behavioral perspective,” says Hussein Mozannar, a graduate student in the Social and Engineering Systems doctoral program within the Institute for Data, Systems, and Society (IDSS) and lead author of a paper about this training process.</div><div><br></div><div>The researchers envision that such onboarding will be a crucial part of training for medical professionals.</div><div><br></div><div>“One could imagine, for example, that doctors making treatment decisions with the help of AI will first have to do training similar to what we propose. We may need to rethink everything from continuing medical education to the way clinical trials are designed,” says senior author David Sontag, a professor of EECS, a member of the MIT-IBM Watson AI Lab and the MIT Jameel Clinic, and the leader of the Clinical Machine Learning Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).</div><div><br></div><div>Mozannar, who is also a researcher with the Clinical Machine Learning Group, is joined on the paper by Jimin J. Lee, an undergraduate in electrical engineering and computer science; Dennis Wei, a senior research scientist at IBM Research; and Prasanna Sattigeri and Subhro Das, research staff members at the MIT-IBM Watson AI Lab. The paper will be presented at the Conference on Neural Information Processing Systems.</div><div><br></div><div>Training that evolves</div><div><br></div><div>Existing onboarding methods for human-AI collaboration are often composed of training materials produced by human experts for specific use cases, making them difficult to scale up. 
Some related techniques rely on explanations, where the AI tells the user its confidence in each decision, but research has shown that explanations are rarely helpful, Mozannar says.</div><div><br></div><div>“The AI model’s capabilities are constantly evolving, so the use cases where the human could potentially benefit from it are growing over time. At the same time, the user’s perception of the model continues changing. So, we need a training procedure that also evolves over time,” he adds.</div><div><br></div><div>To accomplish this, their onboarding method is automatically learned from data. It is built from a dataset that contains many instances of a task, such as detecting the presence of a traffic light from a blurry image.</div><div><br></div><div>The system’s first step is to collect data on the human and AI performing this task. In this case, the human would try to predict, with the help of AI, whether blurry images contain traffic lights.</div><div><br></div><div>The system embeds these data points onto a latent space, which is a representation of data in which similar data points are closer together. It uses an algorithm to discover regions of this space where the human collaborates incorrectly with the AI. These regions capture instances where the human trusted the AI’s prediction but the prediction was wrong, and vice versa.</div><div><br></div><div>Perhaps the human mistakenly trusts the AI when images show a highway at night.</div><div><br></div><div>After discovering the regions, a second algorithm utilizes a large language model to describe each region as a rule, using natural language. The algorithm iteratively fine-tunes that rule by finding contrasting examples. It might describe this region as “ignore AI when it is a highway during the night.”</div><div><br></div><div>These rules are used to build training exercises. 
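The region-discovery step can be illustrated with a much simpler stand-in than the paper's actual algorithm: cluster the embedded instances, then flag clusters where the human's trust in the AI tends to be misplaced. Everything below (the traffic-light scenario, the embeddings, the error rates) is synthetic and hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic stand-ins: 2-D embeddings of 400 task instances, whether the AI
# was correct on each one, and whether the human went along with the AI.
emb = np.vstack([rng.normal(0, 1, (200, 2)),          # e.g. daytime scenes
                 rng.normal(6, 1, (200, 2))])         # e.g. night highway scenes
ai_correct = np.concatenate([rng.random(200) < 0.9,   # AI reliable here
                             rng.random(200) < 0.4])  # AI unreliable here
human_trusted = rng.random(400) < 0.8                 # human usually defers to the AI

# Partition the embedding space, then flag regions where deferring to the AI
# tends to produce errors -- candidates for a natural-language rule such as
# "ignore the AI when it is a highway during the night."
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
for k in range(2):
    mask = labels == k
    misplaced = np.mean(human_trusted[mask] & ~ai_correct[mask])
    flag = "-> teach 'ignore the AI' here" if misplaced > 0.25 else ""
    print(f"region {k}: misplaced-trust rate {misplaced:.2f} {flag}")
```

One cluster shows a high misplaced-trust rate and would become a training exercise; the other would not.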
The onboarding system shows an example to the human, in this case a blurry highway scene at night, as well as the AI’s prediction, and asks the user if the image shows traffic lights. The user can answer yes, no, or use the AI’s prediction.</div><div><br></div><div>If the human is wrong, they are shown the correct answer and performance statistics for the human and AI on these instances of the task. The system does this for each region, and at the end of the training process, repeats the exercises the human got wrong.</div><div><br></div><div>“After that, the human has learned something about these regions that we hope they will take away in the future to make more accurate predictions,” Mozannar says.</div><div><br></div><div>Onboarding boosts accuracy</div><div><br></div><div>The researchers tested this system with users on two tasks — detecting traffic lights in blurry images and answering multiple choice questions from many domains (such as biology, philosophy, computer science, etc.).</div><div><br></div><div>They first showed users a card with information about the AI model, how it was trained, and a breakdown of its performance on broad categories. Users were split into five groups: Some were only shown the card, some went through the researchers’ onboarding procedure, some went through a baseline onboarding procedure, some went through the researchers’ onboarding procedure and were given recommendations of when they should or should not trust the AI, and others were only given the recommendations.</div><div><br></div><div>Only the researchers’ onboarding procedure without recommendations improved users’ accuracy significantly, boosting their performance on the traffic light prediction task by about 5 percent without slowing them down. However, onboarding was not as effective for the question-answering task. 
The researchers believe this is because the AI model, ChatGPT, provided explanations with each answer that conveyed whether it should be trusted.</div><div><br></div><div>But providing recommendations without onboarding had the opposite effect — users not only performed worse, they took more time to make predictions.</div><div><br></div><div>“When you only give someone recommendations, it seems like they get confused and don’t know what to do. It derails their process. People also don’t like being told what to do, so that is a factor as well,” Mozannar says.</div><div><br></div><div>Providing recommendations alone could harm the user if those recommendations are wrong, he adds. With onboarding, on the other hand, the biggest limitation is the amount of available data. If there aren’t enough data, the onboarding stage won’t be as effective, he says.</div><div><br></div><div>In the future, he and his collaborators want to conduct larger studies to evaluate the short- and long-term effects of onboarding. They also want to leverage unlabeled data for the onboarding process, and find methods to effectively reduce the number of regions without omitting important examples.</div><div><br></div><div>“People are adopting AI systems willy-nilly, and indeed AI offers great potential, but these AI agents still sometimes make mistakes. Thus, it’s crucial for AI developers to devise methods that help humans know when it’s safe to rely on the AI’s suggestions,” says Dan Weld, professor emeritus at the Paul G. Allen School of Computer Science and Engineering at the University of Washington, who was not involved with this research. “Mozannar et al. have created an innovative method for identifying situations where the AI is trustworthy, and (importantly) to describe them to people in a way that leads to better human-AI team interactions.”</div><div><br></div><div>This work is funded, in part, by the MIT-IBM Watson AI Lab.</div><div><br></div></div>]]></description>
			<pubDate>Fri, 08 Dec 2023 09:10:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/MIT-AI-Onboarding-01-PRESS_0_thumb.jpg" length="60621" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?automated-system-teaches-users-when-to-collaborate-with-an-ai-assistant</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000000A</guid>
		</item>
		<item>
			<title><![CDATA[This 3D printer can watch itself fabricate objects]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000009"></div>]]></description>
			<pubDate>Thu, 16 Nov 2023 09:07:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/MIT-VisionJetting-01-press_0_thumb.jpg" length="30593" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?this-3d-printer-can-watch-itself-fabricate-objects</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000009</guid>
		</item>
		<item>
			<title><![CDATA[Kinetic Consulting launches Macky AI – the first AI business consulting platform available to any business]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000001F"><div class="imTAJustify"><span class="cf1">Kinetic Consulting</span>, a leading boutique consultancy specialising in business growth, has released <span class="cf1">macky.ai</span>, the first AI business consulting platform that offers any organisation an easy, non-prompt-based AI consulting solution across up to 55 business categories. The platform is powered by OpenAI’s artificial intelligence technology.</div><div>What is Macky AI?</div><div class="imTAJustify">The <span class="cf1">Macky AI</span> platform overcomes some of the key hurdles preventing the mass adoption of AI in a business environment. Employees need no training to begin using the platform, and no knowledge of how to prompt an AI: the work of choosing the right prompt and judging whether the output is suitable has already been done by the software’s creator, Kinetic Consulting. Platform users are asked at most three questions to generate the desired output.</div><div class="imTAJustify">The creators of the <span class="cf1">Macky AI</span> software have catalogued the everyday requirements of key business departments and the kinds of output a generative AI solution can suitably produce. Examples range from something as simple as a job description for a new employee to something more complex, such as creating a new business process or reengineering an existing one. Tasks like these are typically handed to consultants. 
Macky AI aims to reduce the cost of everyday consulting needs for companies so they can empower their employees to complete these tasks without the need for costly consultants.</div><div class="imTAJustify">By freeing up the costs paid for these lower-level activities, companies can divert effort and funds to higher-value business initiatives, such as business roadmaps and growth strategy plans. These higher-value, more complex requirements will remain better suited to traditional consulting. The Macky AI platform is unique because it also provides its users with traditional consultants for more complex needs. <span class="cf1">Macky AI</span> thus gives organisations the best of both worlds on a single platform. The future of consulting will be the augmentation of AI and human consultants.</div><div>Macky AI provides new consulting options for SMEs</div><div class="imTAJustify">A 2023 report by the OECD[1] on the outlook of SMEs in OECD countries highlights that the majority are currently operating in highly challenging environments. The report cites that SMEs have been greatly impacted by the COVID-19 pandemic, rising geopolitical tensions, high inflation, tighter monetary and fiscal policy, and supply-chain disruptions. Retaining and attracting staff has also become a major issue for SMEs in OECD countries; a Future of Business Survey found it to be the second most pressing challenge for SMEs older than two years in the first quarter of 2022[2]. Many SMEs have also depleted their cash reserves during the pandemic and now find it challenging to raise the capital needed to cover the rising costs of goods and services and to fund digital transformation projects.</div><div class="imTAJustify">Outside the OECD, a thriving SME ecosystem is even more critical. 
In the Gulf region, SMEs contribute even more to the economy than their counterparts in OECD countries. Within the UAE, for example, SMEs represent 94% of the companies and institutions operating in the country, contribute more than 50% of the country’s GDP, and account for 86% of the private sector’s workforce. Across the rest of the GCC, SMEs employ 80% of the workforce in Saudi Arabia, 43% in Oman, 57% in Bahrain, 23% in Kuwait, and 20% in Qatar.</div><div class="imTAJustify">Healthy, thriving SMEs are widely recognised as a primary pillar of strength for any economy. The challenging environment and rising cost of capital make it difficult for SMEs to afford traditional consulting. Ironically, this is precisely when consulting is most needed to help SMEs navigate, transform, and thrive again. <span class="cf1">Macky AI</span> gives SMEs affordable access to consulting services using artificial intelligence. The AI business consulting platform provides an on-demand service for key business challenges, such as analysing a profit and loss statement to identify cost savings or developing a 12-month marketing plan to increase sales.</div><div>The future of consulting</div><div class="imTAJustify">Business consulting, like most industries, is undergoing a period of disruption. Technological advances such as artificial intelligence are reshaping how consulting is delivered. Critics may argue that because AI is not 100% accurate and is prone to error, it should not be used. This argument is flawed: human-based consulting is also prone to errors. All outputs delivered by human or AI consultants should be checked for accuracy. 
The advancement of generative AI technology has reached a point where it is now highly useful in business and education environments.</div><div class="imTAJustify">AI technologies should be embraced rather than resisted when they are fit for purpose. <span class="cf1">Macky AI</span> is designed specifically for business-related needs; even in the open question section of the platform, the AI has been programmed not to answer questions that are not business-related. Restricting it to business purposes ensures that if employers give it to their employees, it will not be used for personal needs.</div><div class="imTAJustify">“As advancements in AI evolve, we need to accept that it will become a natural part of how we interact with things, get answers to our questions, and help solve complex problems. The future of consulting will be an augmentation between AI and human consultants. This is the inevitable evolutionary path. The percentage of AI usage versus human is unknown at this stage. However, I am 100% confident it will not be all traditional human consulting for much longer. Macky AI is the first step towards bringing AI into the workplace in a controlled environment for a specific business purpose. By empowering SMEs with affordable consulting outputs for business tasks, we are also helping SMEs overcome everyday business challenges and thrive in the future. 
Macky AI is designed to democratise consulting, making it accessible to all organisations regardless of size,” said Joe Tawfik, founder of Macky AI.</div><div class="imTAJustify">[1] OECD (2023), <em>OECD SME and Entrepreneurship Outlook 2023</em>, OECD Publishing, Paris, <span class="cf1">https://doi.org/10.1787/342b8564-en</span>.</div><div class="imTAJustify">[2] OECD-World Bank-Meta Future of Business Survey, <span class="cf1">Data for Good</span> (March 2022).</div><div class="imTAJustify"><em>(Editor’s note: This article is sponsored by <span class="cf1">Kinetic Consulting</span>)</em></div></div>]]></description>
			<pubDate>Thu, 16 Nov 2023 00:25:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/macky_thumb.jpg" length="74866" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?kinetic-consulting-launches-macky-ai---the-first-ai-business-consulting-platform-available-to-any-business</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000001F</guid>
		</item>
		<item>
			<title><![CDATA[AI model speeds up high-resolution computer vision]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000005C"><div>An autonomous vehicle must rapidly and accurately recognize objects that it encounters, from an idling delivery truck parked at the corner to a cyclist whizzing toward an approaching intersection.</div><div><br></div><div>To do this, the vehicle might use a powerful computer vision model to categorize every pixel in a high-resolution image of this scene, so it doesn’t lose sight of objects that might be obscured in a lower-quality image. But this task, known as semantic segmentation, is complex and requires a huge amount of computation when the image has high resolution.</div><div><br></div><div>Researchers from MIT, the MIT-IBM Watson AI Lab, and elsewhere have developed a more efficient computer vision model that vastly reduces the computational complexity of this task. Their model can perform semantic segmentation accurately in real-time on a device with limited hardware resources, such as the on-board computers that enable an autonomous vehicle to make split-second decisions.</div><div><br></div><div><div>Recent state-of-the-art semantic segmentation models directly learn the interaction between each pair of pixels in an image, so their calculations grow quadratically as image resolution increases. 
Because of this, while these models are accurate, they are too slow to process high-resolution images in real time on an edge device like a sensor or mobile phone.</div><div><br></div><div>The MIT researchers designed a new building block for semantic segmentation models that achieves the same abilities as these state-of-the-art models, but with only linear computational complexity and hardware-efficient operations.</div><div><br></div><div><div class="imTACenter"><iframe width="560" height="315" src="https://www.youtube.com/embed/9vjyMCE-IbI?si=npKoII6yWdlLu7Hu" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></div></div><div><br></div><div>The result is a new model series for high-resolution computer vision that performs up to nine times faster than prior models when deployed on a mobile device. Importantly, this new model series exhibited the same or better accuracy than these alternatives.</div><div><br></div><div>Not only could this technique be used to help autonomous vehicles make decisions in real-time, it could also improve the efficiency of other high-resolution computer vision tasks, such as medical image segmentation.</div><div><br></div><div>“While researchers have been using traditional vision transformers for quite a long time, and they give amazing results, we want people to also pay attention to the efficiency aspect of these models. 
Our work shows that it is possible to drastically reduce the computation so this real-time image segmentation can happen locally on a device,” says Song Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT-IBM Watson AI Lab, and senior author of the <span class="cf1">paper</span> describing the new model.</div><div><br></div><div>He is joined on the paper by lead author Han Cai, an EECS graduate student; Junyan Li, an undergraduate at Zhejiang University; Muyan Hu, an undergraduate student at Tsinghua University; and Chuang Gan, a principal research staff member at the MIT-IBM Watson AI Lab. The research will be presented at the International Conference on Computer Vision.</div><div><br></div><div><strong>A simplified solution</strong></div><div>Categorizing every pixel in a high-resolution image that may have millions of pixels is a difficult task for a machine-learning model. A powerful new type of model, known as a vision transformer, has recently been used effectively.</div><div><br></div><div>Transformers were originally developed for natural language processing. In that context, they encode each word in a sentence as a token and then generate an attention map, which captures each token’s relationships with all other tokens. This attention map helps the model understand context when it makes predictions.</div><div><br></div><div>Using the same concept, a vision transformer chops an image into patches of pixels and encodes each small patch into a token before generating an attention map. In generating this attention map, the model uses a similarity function that directly learns the interaction between each pair of pixels. 
In this way, the model develops what is known as a global receptive field, which means it can access all the relevant parts of the image.</div><div>Since a high-resolution image may contain millions of pixels, chunked into thousands of patches, the attention map quickly becomes enormous. Because of this, the amount of computation grows quadratically as the resolution of the image increases.</div><div><br></div><div>In their new model series, called EfficientViT, the MIT researchers used a simpler mechanism to build the attention map — replacing the nonlinear similarity function with a linear similarity function. As such, they can rearrange the order of operations to reduce total calculations without changing functionality or losing the global receptive field. With their model, the amount of computation needed for a prediction grows linearly as the image resolution grows.</div><div><br></div><div>“But there is no free lunch. The linear attention only captures global context about the image, losing local information, which makes the accuracy worse,” Han says.</div><div><br></div><div>To compensate for that accuracy loss, the researchers included two extra components in their model, each of which adds only a small amount of computation.</div><div><br></div><div>One of those elements helps the model capture local feature interactions, mitigating the linear function’s weakness in local information extraction. The second, a module that enables multiscale learning, helps the model recognize both large and small objects.</div><div><br></div><div>“The most critical part here is that we need to carefully balance the performance and the efficiency,” Cai says.</div><div><br></div><div>They designed EfficientViT with a hardware-friendly architecture, so it could be easier to run on different types of devices, such as virtual reality headsets or the edge computers on autonomous vehicles. 
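The reordering that makes linear attention cheap can be seen in a toy NumPy sketch. This is not the EfficientViT code (normalization and the extra local and multiscale modules are omitted); it only shows that with a linear similarity, matrix multiplication is associative, so the same output can be computed without ever forming the n-by-n attention map:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 16                  # n image patches (tokens), d feature dims
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

# Quadratic grouping: forms an n-by-n attention map, so compute and
# memory grow with the square of the number of patches.
out_quadratic = (Q @ K.T) @ V

# Linear similarity lets us regroup: the d-by-d product K.T @ V is
# formed first, and the cost becomes linear in the number of patches.
out_linear = Q @ (K.T @ V)

assert np.allclose(out_quadratic, out_linear)
```

A softmax similarity cannot be regrouped this way, which is why standard vision transformers stay quadratic.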
Their model could also be applied to other computer vision tasks, like image classification.</div><div><br></div><div><strong>Streamlining semantic segmentation</strong></div><div>When they tested their model on datasets used for semantic segmentation, they found that it performed up to nine times faster on a Nvidia graphics processing unit (GPU) than other popular vision transformer models, with the same or better accuracy.</div><div><br></div><div>“Now, we can get the best of both worlds and reduce the computing to make it fast enough that we can run it on mobile and cloud devices,” Han says.</div><div><br></div><div>Building off these results, the researchers want to apply this technique to speed up generative machine-learning models, such as those used to generate new images. They also want to continue scaling up EfficientViT for other vision tasks.</div><div><br></div><div>“Efficient transformer models, pioneered by Professor Song Han’s team, now form the backbone of cutting-edge techniques in diverse computer vision tasks, including detection and segmentation,” says Lu Tian, senior director of AI algorithms at AMD, Inc., who was not involved with this paper. “Their research not only showcases the efficiency and capability of transformers, but also reveals their immense potential for real-world applications, such as enhancing image quality in video games.”</div><div><br></div><div>“Model compression and light-weight model design are crucial research topics toward efficient AI computing, especially in the context of large foundation models. Professor Song Han’s group has shown remarkable progress compressing and accelerating modern deep learning models, particularly vision transformers,” adds Jay Jackson, global vice president of artificial intelligence and machine learning at Oracle, who was not involved with this research. 
“Oracle Cloud Infrastructure has been supporting his team to advance this line of impactful research toward efficient and green AI.”</div></div></div>]]></description>
			<pubDate>Wed, 15 Nov 2023 10:35:00 GMT</pubDate>
			<link>http://asianheritagesociety.org/blog/?ai-model-speeds-up-high-resolution-computer-vision</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000005C</guid>
		</item>
		<item>
			<title><![CDATA[Generating opportunities with generative AI]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000008"><div>Rama Ramakrishnan helps companies explore the promises and perils of large language models and other transformative AI technologies.</div><div><br></div><div>Talking with retail executives back in 2010, Rama Ramakrishnan came to two realizations. First, although retail systems that offered customers personalized recommendations were getting a great deal of attention, these systems often provided little payoff for retailers. Second, for many of the firms, most customers shopped only once or twice a year, so companies didn't really know much about them.</div><div><br></div><div>“But by being very diligent about noting down the interactions a customer has with a retailer or an e-commerce site, we can create a very nice and detailed composite picture of what that person does and what they care about,” says Ramakrishnan, professor of the practice at the MIT Sloan School of Management. “Once you have that, then you can apply proven algorithms from machine learning.”</div><div><br></div><div>These realizations led Ramakrishnan to found CQuotient, a startup whose software has now become the foundation for Salesforce's widely adopted AI e-commerce platform. “On Black Friday alone, CQuotient technology probably sees and interacts with over a billion shoppers on a single day,” he says.</div><div><br></div><div>After a highly successful entrepreneurial career, in 2019 Ramakrishnan returned to MIT Sloan, where he had earned master's and PhD degrees in operations research in the 1990s. He teaches students “not just how these amazing technologies work, but also how do you take these technologies and actually put them to use pragmatically in the real world,” he says.</div><div><br></div><div>Additionally, Ramakrishnan enjoys participating in MIT executive education. 
“This is a great opportunity for me to convey the things that I have learned, but also as importantly, to learn what's on the minds of these senior executives, and to guide them and nudge them in the right direction,” he says.</div><div><br></div><div>For example, executives are understandably concerned about the need for massive amounts of data to train machine learning systems. He can now guide them to a wealth of models that are pre-trained for specific tasks. “The ability to use these pre-trained AI models, and very quickly adapt them to your particular business problem, is an incredible advance,” says Ramakrishnan.</div><div><br></div><div>Understanding AI categories</div><div><br></div><div>“AI is the quest to imbue computers with the ability to do cognitive tasks that typically only humans can do,” he says. Understanding the history of this complex, supercharged landscape aids in exploiting the technologies.</div><div><br></div><div>The traditional approach to AI, which basically solved problems by applying if/then rules learned from humans, proved useful for relatively few tasks. “One reason is that we can do lots of things effortlessly, but if asked to explain how we do them, we can't actually articulate how we do them,” Ramakrishnan comments. Also, those systems may be baffled by new situations that don't match up to the rules enshrined in the software.</div><div><br></div><div>Machine learning takes a dramatically different approach, with the software fundamentally learning by example. “You give it lots of examples of inputs and outputs, questions and answers, tasks and responses, and get the computer to automatically learn how to go from the input to the output,” he says. Credit scoring, loan decision-making, disease prediction, and demand forecasting are among the many tasks conquered by machine learning.</div><div><br></div><div>But machine learning only worked well when the input data was structured, for instance in a spreadsheet. 
“If the input data was unstructured, such as images, video, audio, ECGs, or X-rays, it wasn't very good at going from that to a predicted output,” Ramakrishnan says. That means humans had to manually structure the unstructured data to train the system.</div><div><br></div><div>Around 2010 deep learning began to overcome that limitation, delivering the ability to directly work with unstructured input data, he says. Based on a longstanding AI strategy known as neural networks, deep learning became practical due to the global flood tide of data, the availability of extraordinarily powerful parallel processing hardware called graphics processing units (originally invented for video games) and advances in algorithms and math.</div><div><br></div><div>Finally, within deep learning, the generative AI software packages appearing last year can create unstructured outputs, such as human-sounding text, images of dogs, and three-dimensional models. Large language models (LLMs) such as OpenAI's ChatGPT go from text inputs to text outputs, while text-to-image models such as OpenAI's DALL-E can churn out realistic-appearing images.</div><div><br></div><div>What generative AI can (and can't) do</div><div><br></div><div>Trained on the unimaginably vast text resources of the internet, a LLM’s “fundamental capability is to predict the next most likely, most plausible word,” Ramakrishnan says. “Then it attaches the word to the original sentence, predicts the next word again, and keeps on doing it.”</div><div><br></div><div>“To the surprise of many, including a lot of researchers, an LLM can do some very complicated things,” he says. “It can compose beautifully coherent poetry, write Seinfeld episodes, and solve some kinds of reasoning problems. 
It’s really quite remarkable how next-word prediction can lead to these amazing capabilities.”</div><div><br></div><div>“But you have to always keep in mind that what it is doing is not so much finding the correct answer to your question as finding a plausible answer to your question,” Ramakrishnan emphasizes. Its content may be factually inaccurate, irrelevant, toxic, biased, or offensive.</div><div><br></div><div>That puts the burden on users to make sure that the output is correct, relevant, and useful for the task at hand. “You have to make sure there is some way for you to check its output for errors and fix them before it goes out,” he says.</div><div><br></div><div>Intense research is underway to find techniques to address these shortcomings, adds Ramakrishnan, who expects many innovative tools to do so.</div><div><br></div><div>Finding the right corporate roles for LLMs</div><div><br></div><div>Given the astonishing progress in LLMs, how should industry think about applying the software to tasks such as generating content?</div><div><br></div><div>First, Ramakrishnan advises, consider costs: “Is it a much less expensive effort to have a draft that you correct, versus you creating the whole thing?” Second, if the LLM makes a mistake that slips by, and the mistaken content is released to the outside world, can you live with the consequences?</div><div><br></div><div>“If you have an application which satisfies both considerations, then it’s good to do a pilot project to see whether these technologies can actually help you with that particular task,” says Ramakrishnan. He stresses the need to treat the pilot as an experiment rather than as a normal IT project.</div><div><br></div><div>Right now, software development is the most mature corporate LLM application. “ChatGPT and other LLMs are text-in, text-out, and a software program is just text-out,” he says. 
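Under the hood, this text-in, text-out behavior reduces to the next-word loop Ramakrishnan describes. A deliberately tiny sketch, in which a hand-made bigram table is a hypothetical stand-in for a real model's learned probabilities over its whole vocabulary:

```python
# Toy greedy decoding loop: predict the most plausible next word,
# append it to the sentence, and repeat. The bigram table below is
# a hypothetical stand-in for a real large language model.
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"down": 4},
}

def generate(prompt, max_words=10):
    words = prompt.split()
    while len(words) < max_words:
        options = bigram_counts.get(words[-1])
        if not options:              # no known continuation: stop
            break
        # Pick the most likely next word given the previous one.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # the cat sat down
```

Real LLMs condition on the entire preceding text rather than one word, and usually sample rather than always taking the single most likely word, but the loop itself is the same.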
“Programmers can go from English text-in to Python text-out, as well as you can go from English-to-English or English-to-German. There are lots of tools which help you write code using these technologies.”</div><div><br></div><div>Of course, programmers must make sure the result does the job properly. Fortunately, software development already offers infrastructure for testing and verifying code. “This is a beautiful sweet spot,” he says, “where it's much cheaper to have the technology write code for you, because you can very quickly check and verify it.”</div><div><br></div><div>Another major LLM use is content generation, such as writing marketing copy or e-commerce product descriptions. “Again, it may be much cheaper to fix ChatGPT’s draft than for you to write the whole thing,” Ramakrishnan says. “However, companies must be very careful to make sure there is a human in the loop.”</div><div><br></div><div>LLMs also are spreading quickly as in-house tools to search enterprise documents. Unlike conventional search algorithms, an LLM chatbot can offer a conversational search experience, because it remembers each question you ask. “But again, it will occasionally make things up,” he says. “In terms of chatbots for external customers, these are very early days, because of the risk of saying something wrong to the customer.”</div><div><br></div><div>Overall, Ramakrishnan notes, we're living in a remarkable time to grapple with AI’s rapidly evolving potentials and pitfalls. “I help companies figure out how to take these very transformative technologies and put them to work, to make products and services much more intelligent, employees much more productive, and processes much more efficient,” he says.</div><div><br></div></div>]]></description>
			<pubDate>Thu, 02 Nov 2023 09:02:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/Rama-Ramakrishnan-MIT-News_thumb.jpg" length="40316" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?generating-opportunities-with-generative-ai</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000008</guid>
		</item>
		<item>
			<title><![CDATA[The brain may learn about the world the same way some computational models do]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000007"><div>Two studies find “self-supervised” models, which learn about their environment from unlabeled data, can show activity patterns similar to those of the mammalian brain.</div><div><br></div><div>To make our way through the world, our brain must develop an intuitive understanding of the physical world around us, which we then use to interpret sensory information coming into the brain.</div><div><br></div><div>How does the brain develop that intuitive understanding? Many scientists believe that it may use a process similar to what’s known as “self-supervised learning.” This type of machine learning, originally developed as a way to create more efficient models for computer vision, allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.</div><div><br></div><div>A pair of studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT offers new evidence supporting this hypothesis. The researchers found that when they trained models known as neural networks using a particular type of self-supervised learning, the resulting models generated activity patterns very similar to those seen in the brains of animals that were performing the same tasks as the models.</div><div><br></div><div>The findings suggest that these models are able to learn representations of the physical world that they can use to make accurate predictions about what will happen in that world, and that the mammalian brain may be using the same strategy, the researchers say.</div><div><br></div><div>“The theme of our work is that AI designed to help build better robots ends up also being a framework to better understand the brain more generally,” says Aran Nayebi, a postdoc in the ICoN Center. 
“We can’t say if it’s the whole brain yet, but across scales and disparate brain areas, our results seem to be suggestive of an organizing principle.”</div><div><br></div><div>Nayebi is the lead author of one of the studies, co-authored with Rishi Rajalingham, a former MIT postdoc now at Meta Reality Labs, and senior authors Mehrdad Jazayeri, an associate professor of brain and cognitive sciences and a member of the McGovern Institute for Brain Research; and Robert Yang, an assistant professor of brain and cognitive sciences and an associate member of the McGovern Institute. Ila Fiete, director of the ICoN Center, a professor of brain and cognitive sciences, and an associate member of the McGovern Institute, is the senior author of the other study, which was co-led by Mikail Khona, an MIT graduate student, and Rylan Schaeffer, a former senior research associate at MIT.</div><div><br></div><div>Both studies will be presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS) in December.</div><div><br></div><div>Modeling the physical world</div><div><br></div><div>Early models of computer vision mainly relied on supervised learning. Using this approach, models are trained to classify images that are each labeled with a name — cat, car, etc. The resulting models work well, but this type of training requires a great deal of human-labeled data.</div><div><br></div><div>To create a more efficient alternative, in recent years researchers have turned to models built through a technique known as contrastive self-supervised learning. This type of learning allows an algorithm to learn to classify objects based on how similar they are to each other, with no external labels provided.</div><div><br></div><div>“This is a very powerful method because you can now leverage very large modern data sets, especially videos, and really unlock their potential,” Nayebi says. 
“A lot of the modern AI that you see now, especially in the last couple years with ChatGPT and GPT-4, is a result of training a self-supervised objective function on a large-scale dataset to obtain a very flexible representation.”</div><div><br></div><div>These types of models, also called neural networks, consist of thousands or millions of processing units connected to each other. Each node has connections of varying strengths to other nodes in the network. As the network analyzes huge amounts of data, the strengths of those connections change as the network learns to perform the desired task.</div><div><br></div><div>As the model performs a particular task, the activity patterns of different units within the network can be measured. Each unit’s activity can be represented as a firing pattern, similar to the firing patterns of neurons in the brain. Previous work from Nayebi and others has shown that self-supervised models of vision generate activity similar to that seen in the visual processing system of mammalian brains.</div><div><br></div><div>In both of the new NeurIPS studies, the researchers set out to explore whether self-supervised computational models of other cognitive functions might also show similarities to the mammalian brain. In the study led by Nayebi, the researchers trained self-supervised models to predict the future state of their environment across hundreds of thousands of naturalistic videos depicting everyday scenarios.</div><div><br></div><div>“For the last decade or so, the dominant method to build neural network models in cognitive neuroscience is to train these networks on individual cognitive tasks. But models trained this way rarely generalize to other tasks,” Yang says. 
“Here we test whether we can build models for some aspect of cognition by first training on naturalistic data using self-supervised learning, then evaluating in lab settings.”</div><div><br></div><div>Once the model was trained, the researchers had it generalize to a task they call “Mental-Pong.” This is similar to the video game Pong, where a player moves a paddle to hit a ball traveling across the screen. In the Mental-Pong version, the ball disappears shortly before hitting the paddle, so the player has to estimate its trajectory in order to hit the ball.</div><div><br></div><div>The researchers found that the model was able to track the hidden ball’s trajectory with accuracy similar to that of neurons in the mammalian brain, which had been shown in a previous study by Rajalingham and Jazayeri to simulate its trajectory — a cognitive phenomenon known as “mental simulation.” Furthermore, the neural activation patterns seen within the model were similar to those seen in the brains of animals as they played the game — specifically, in a part of the brain called the dorsomedial frontal cortex. No other class of computational model has been able to match the biological data as closely as this one, the researchers say.</div><div><br></div><div>“There are many efforts in the machine learning community to create artificial intelligence,” Jazayeri says. “The relevance of these models to neurobiology hinges on their ability to additionally capture the inner workings of the brain. The fact that Aran’s model predicts neural data is really important as it suggests that we may be getting closer to building artificial systems that emulate natural intelligence.”</div><div><br></div><div>Navigating the world</div><div><br></div><div>The study led by Khona, Schaeffer, and Fiete focused on a type of specialized neurons known as grid cells. 
These cells, located in the entorhinal cortex, help animals to navigate, working together with place cells located in the hippocampus.</div><div><br></div><div>While place cells fire whenever an animal is in a specific location, grid cells fire only when the animal is at one of the vertices of a triangular lattice. Groups of grid cells create overlapping lattices of different sizes, which allows them to encode a large number of positions using a relatively small number of cells.</div><div><br></div><div>In recent studies, researchers have trained supervised neural networks to mimic grid cell function by predicting an animal’s next location based on its starting point and velocity, a task known as path integration. However, these models hinged on access to privileged information about absolute space at all times — information that the animal does not have.</div><div><br></div><div>Inspired by the striking coding properties of the multiperiodic grid-cell code for space, the MIT team trained a contrastive self-supervised model to both perform this same path integration task and represent space efficiently while doing so. For the training data, they used sequences of velocity inputs. The model learned to distinguish positions based on whether they were similar or different — nearby positions generated similar codes, but further positions generated more different codes.</div><div><br></div><div>“It’s similar to training models on images, where if two images are both heads of cats, their codes should be similar, but if one is the head of a cat and one is a truck, then you want their codes to repel,” Khona says. 
“We’re taking that same idea but applying it to spatial trajectories.”</div><div><br></div><div>Once the model was trained, the researchers found that the activation patterns of the nodes within the model formed several lattice patterns with different periods, very similar to those formed by grid cells in the brain.</div><div><br></div><div>“What excites me about this work is that it makes connections between mathematical work on the striking information-theoretic properties of the grid cell code and the computation of path integration,” Fiete says. “While the mathematical work was analytic — what properties does the grid cell code possess? — the approach of optimizing coding efficiency through self-supervised learning and obtaining grid-like tuning is synthetic: It shows what properties might be necessary and sufficient to explain why the brain has grid cells.”</div><div><br></div><div>The research was funded by the K. Lisa Yang ICoN Center, the National Institutes of Health, the Simons Foundation, the McKnight Foundation, the McGovern Institute, and the Helen Hay Whitney Foundation.</div></div>]]></description>
			<pubDate>Mon, 30 Oct 2023 08:55:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/MIT-SelfSupervisedLearning-01_0_thumb.jpg" length="53486" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?the-brain-may-learn-about-the-world-the-same-way-some-computational-models-do</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000007</guid>
		</item>
		<item>
			<title><![CDATA[UK reveals AI Safety Summit opening day agenda]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000001C"><div class="imTAJustify">The UK Government has unveiled plans for the inaugural global <span class="cf1">AI Safety Summit</span>, scheduled to take place at the historic Bletchley Park.</div><div class="imTAJustify">The summit will bring together digital ministers, AI companies, civil society representatives, and independent experts for crucial discussions. The primary focus is on frontier AI, the most advanced generation of AI models, which – if not developed responsibly – could pose significant risks.</div><div class="imTAJustify">The event aims to explore both the potential dangers emerging from rapid advances in AI and the transformative opportunities the technology presents, especially in education and international research collaborations.</div><div class="imTAJustify">Technology Secretary Michelle Donelan will lead the summit and articulate the government’s position that safety and security must be central to AI advancements. The event will feature parallel sessions in the first half of the day, delving into understanding frontier AI risks.</div><div class="imTAJustify">Other topics that will be covered during the AI Safety Summit include <span class="cf1">threats</span> to national security, potential election disruption, erosion of social trust, and exacerbation of global inequalities.</div><div class="imTAJustify">The latter part of the day will focus on roundtable discussions aimed at enhancing frontier AI safety responsibly. 
Delegates will explore defining risk thresholds, effective safety assessments, and robust governance mechanisms to enable the safe scaling of frontier AI by developers.</div><div class="imTAJustify">International collaboration will be a key theme, emphasising the need for policymakers, scientists, and researchers to work together in managing risks and harnessing AI’s potential for global economic and social benefits.</div><div class="imTAJustify">The summit will conclude with a panel discussion on the transformative opportunities of AI for the public good, specifically in revolutionising education. Donelan will provide closing remarks and underline the importance of global collaboration in adopting AI safely.</div><div class="imTAJustify">This event aims to mark a positive step towards fostering international cooperation in the responsible development and deployment of AI technology. By convening global experts and policymakers, the UK Government wants to lead the conversation on creating a safe and positive future with AI.</div><div class="imTAJustify"><em>(Photo by <span class="cf1">Ricardo Gomez Angel</span> on <span class="cf1">Unsplash</span>)</em></div></div>]]></description>
			<pubDate>Mon, 16 Oct 2023 13:38:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/uk_thumb.jpg" length="52324" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?uk-reveals-ai-safety-summit-opening-day-agenda</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000001C</guid>
		</item>
		<item>
			<title><![CDATA[Omdia: The chatbot market will remain healthily diverse]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000001D"><div class="imTACenter"><span class="cf1">Omdia</span> analysts have assessed that the chatbot market will remain “served by a robust, diverse ecosystem of vendors”.</div><div class="imTAJustify">The report highlights that this runs contrary to vendors’ own assessments and to traditional technology market trends.</div><div class="imTAJustify">Mark Beccue, Principal Analyst at Omdia, commented:</div><div class="imTAJustify"><blockquote><div><em><span class="fs12lh1-5 cf2 ff1">“There are several reasons for a robust chatbot solutions market.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">One, there is persistent market demand for solutions which address a broad spectrum of complexity, from pro developer Do It Yourself (DIY) tools and no code SaaS to bespoke end-to-end solutions.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">Two, it’s likely there will be new market disruptors because of evolving technology, particularly the potential emergence of affordable NLU and training from open-source Large Language Models (LLM).</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">Three, the total addressable market is very large and very complex, led by broad market drivers for CX and workflow automation. 
The market opportunity is nowhere near saturated or commoditised, leaving the door open for a variety of vendors to succeed and prosper.”</span></em></div></blockquote></div><div class="imTAJustify">Enterprise spending on chatbots and virtual digital assistants (VDAs) is set to continue growing at a healthy pace through 2026.</div><div class="imTACenter"><img class="image-0" src="http://asianheritagesociety.org/images/en.png"  title="" alt="" width="550" height="267" /><br></div><div class="imTAJustify"><span class="fs12lh1-5">Omdia claims that increasing demand for chatbots in more complex roles, the growing importance of Business Process Outsourcers (BPOs) in the ecosystem, and the legitimacy of the use of chatbots in messaging channels are driving their upwards trajectory.</span><br></div><div class="imTAJustify"><em>(Photo by <span class="cf1">Jason Leung</span> on <span class="cf1">Unsplash</span>)</em></div></div>]]></description>
			<pubDate>Thu, 12 Oct 2023 13:46:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/omdia_thumb.jpg" length="166153" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?omdia--the-chatbot-market-will-remain-healthily-diverse</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000001D</guid>
		</item>
		<item>
			<title><![CDATA[Microsoft chief Brad Smith warns that killer robots are ‘unstoppable’]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000001E"><div class="imTAJustify">Microsoft chief Brad Smith issued a warning over the weekend that killer robots are ‘unstoppable’ and a new digital Geneva Convention is required.</div><div class="imTAJustify">Most sci-fi fans will think of Terminator when they hear of killer robots. In the classic film series, a rogue military AI called Skynet gained self-awareness after spreading to millions of servers around the world. Concluding that humans would attempt to shut it down, Skynet sought to exterminate all of mankind in the interest of self-preservation.</div><div class="imTAJustify">While it was once just a popcorn flick, Terminator now offers a dire warning of what could happen if precautions are not taken.</div><div class="imTAJustify">As with most technologies, AI will find itself increasingly used for military applications. The ultimate goal for general artificial intelligence is to self-learn. Combine both, and Skynet no longer seems the wild dramatisation that it once did.</div><div class="imTAJustify">Speaking to <span class="cf1">The Telegraph</span>, Smith seems to agree. He points towards developments in the US, China, UK, Russia, Israel, South Korea, and others, all of which are developing autonomous weapon systems.</div><div class="imTAJustify">Wars could one day be fought on battlefields entirely with robots, a scenario that has many pros and cons. On the one hand, it reduces the risk to human troops. On the other, it makes declaring war easier and runs the risk of machines going awry.</div><div class="imTAJustify">Many technologists have likened the race to militarise AI to the nuclear arms race. In a pursuit to be the first and best, dangerous risks may be taken.</div><div class="imTAJustify">It is still unclear who bears responsibility for death or injuries caused by an autonomous machine – the manufacturer, the developer, or an overseer. 
This has also been a subject of much debate with regard to how insurance will work with driverless cars.</div><div class="imTAJustify">With military applications, many technologists have called for AI to never make a combat decision – especially one that would result in fatalities – on its own. While AI can make recommendations, a final decision must be made by a human.</div><div>Preventing unimaginable devastation</div><div class="imTAJustify">The story of Russian lieutenant colonel Stanislav Petrov in 1983 offers a warning of how a machine without human oversight may cause unimaginable devastation.</div><div class="imTAJustify">Petrov’s computers reported that an intercontinental missile had been launched by the US towards the Soviet Union. The Soviet Union’s strategy in such a scenario was an immediate and compulsory nuclear counter-attack against the US. Petrov trusted his instinct that the computer was incorrect and decided against launching a nuclear missile, and he was right. </div><div class="imTAJustify">Had the 1983 decision whether to launch a nuclear missile been made solely by the computer, one would have been launched and met with retaliatory launches from the US and its allies.</div><div class="imTAJustify">Smith wants to see a new digital Geneva Convention in order to bring world powers together in agreement over acceptable norms when it comes to AI. “The safety of civilians is at risk today. We need more urgent action, and we need it in the form of a digital Geneva Convention, rules that will protect civilians and soldiers.” </div><div class="imTAJustify">Many companies – <span class="cf1">including thousands of Google employees</span>, following backlash over a Pentagon contract to develop AI tech for drones – have pledged not to develop AI technologies for harmful use.</div><div class="imTAJustify">Smith has launched a new book called <em>Tools and Weapons</em>. 
At the launch, Smith also called for stricter rules over the use of facial recognition technology. “There needs to be a new law in this space, we need regulation in the world of facial recognition in order to protect against potential abuse.”</div><div class="imTAJustify">Last month, <span class="cf1">a report</span> from Dutch NGO PAX said leading tech firms are putting the world ‘at risk’ of killer AI. Microsoft, along with Amazon, was ranked among the highest risk. Microsoft itself <span class="cf1">warned</span> investors back in February that its AI offerings could damage the company’s reputation. </div><div class="imTAJustify">“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of PAX’s report.</div><div class="imTAJustify">A global campaign simply titled <span class="cf1">Campaign To Stop Killer Robots</span> now includes 113 NGOs across 57 countries and has doubled in size over the past year.</div><div class="imTAJustify"><figure></figure></div><div class="imTAJustify"><strong><b>Interested in hearing industry leaders discuss subjects like this?</b></strong> Attend the co-located <span class="cf1">5G Expo</span>, <span class="cf1">IoT Tech Expo</span>, <span class="cf1">Blockchain Expo</span>, <span class="cf1">AI &amp; Big Data Expo</span>, and <span class="cf1">Cyber Security &amp; Cloud Expo World Series</span> with upcoming events in Silicon Valley, London, and Amsterdam.</div></div>]]></description>
			<pubDate>Sat, 23 Sep 2023 00:16:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/ar_thumb.jpg" length="84407" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?microsoft-chief-brad-smith-warns-that-killer-robots-are--unstoppable-</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000001E</guid>
		</item>
		<item>
			<title><![CDATA[UK commits £13M to cutting-edge AI healthcare research]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000020"><div class="imTAJustify">The UK has announced a £13 million investment in cutting-edge AI research within the healthcare sector.</div><div class="imTAJustify">The announcement, made by Technology Secretary Michelle Donelan, marks a major step forward in harnessing the potential of AI in revolutionising healthcare. The investment will empower 22 winning projects across universities and NHS trusts, from Edinburgh to Surrey, to drive innovation and transform patient care.</div><div class="imTAJustify">Dr Antonio Espingardeiro, <span class="cf1">IEEE</span> member and software and robotics expert, comments:</div><div class="imTAJustify"><blockquote><div><em><span class="fs12lh1-5 cf2 ff1">“As it becomes more sophisticated, AI can efficiently conduct tasks traditionally undertaken by humans. The potential for the technology within the medical field is huge—it can analyse vast quantities of information and, when coupled with machine learning, search through records and infer patterns or anomalies in data that would otherwise take decades for humans to analyse.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">We are just starting to see the beginning of a new era where machine learning could bring substantial value and transform the traditional role of the doctor. The true capabilities of this technology as an aid to the healthcare sector are yet to be fully realised. In the future, we may even be able to solve some of the biggest challenges and issues of our time.”</span></em></div></blockquote></div><div class="imTAJustify">One of the standout projects receiving funding is University College London’s <span class="cf1">Centre for Interventional and Surgical Sciences</span>. With a grant exceeding £500,000, researchers aim to develop a semi-autonomous surgical robotics platform designed to enhance the removal of brain tumours. 
This pioneering technology promises to elevate surgical outcomes, minimise complications, and expedite patient recovery times.</div><div class="imTAJustify">“With the increased adoption of AI and robotics, we will soon be able to deliver the scalability that the healthcare sector needs and establish more proactive care delivery,” added Espingardeiro.</div><div class="imTAJustify">University of Sheffield’s project, backed by £463,000, is focused on a crucial aspect of healthcare – chronic nerve pain. Their innovative approach aims to widen and improve treatments for this condition, which affects one in ten adults over 30.</div><div class="imTAJustify">The University of Oxford’s project, bolstered by £640,000, seeks to expedite research into a foundational AI model for clinical risk prediction. By analysing an individual’s existing health conditions, this AI model could accurately forecast the likelihood of future health problems and revolutionise early intervention strategies.</div><div class="imTAJustify">Meanwhile, Heriot-Watt University in Edinburgh has secured £644,000 to develop a groundbreaking system that offers real-time feedback to trainee surgeons practising laparoscopy procedures, also known as keyhole surgeries. This technology promises to enhance the proficiency of aspiring surgeons and elevate the overall quality of healthcare.</div><div class="imTAJustify">Finally, the University of Surrey’s project – backed by £456,000 – will collaborate closely with radiologists to develop AI capable of enhancing mammogram analysis. 
By streamlining and improving this critical diagnostic process, AI could contribute to earlier cancer detection.</div><div class="imTAJustify">Ayesha Iqbal, IEEE senior member and engineering trainer at the <span class="cf1">Advanced Manufacturing Training Centre</span>, said:</div><div class="imTAJustify"><blockquote><div><em><span class="fs12lh1-5 cf2 ff1">“The emergence of AI in healthcare has completely reshaped the way we diagnose, treat, and monitor patients.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">Applications of AI in healthcare include finding new links between genetic codes, performing robot-assisted surgeries, improving medical imaging methods, automating administrative tasks, personalising treatment options, producing more accurate diagnoses and treatment plans, enhancing preventive care and quality of life, predicting and tracking the spread of infectious diseases, and helping combat epidemics and pandemics.”</span></em></div></blockquote></div><div class="imTAJustify">With the UK healthcare sector already witnessing AI applications in improving stroke diagnosis, heart attack risk assessment, and more, the £13 million investment is poised to further accelerate transformative healthcare breakthroughs.</div><div class="imTAJustify">Health and Social Care Secretary Steve Barclay commented:</div><div class="imTAJustify"><blockquote><div><em><span class="fs12lh1-5 cf2 ff1">“AI can help the NHS improve outcomes for patients, with breakthroughs leading to earlier diagnosis, more effective treatments, and faster recovery. It’s already being used in the NHS in a number of areas, from improving diagnosis and treatment for stroke patients to identifying those most at risk of a heart attack.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">This funding is yet another boost to help the UK lead the way in healthcare research. 
It comes on top of the £21 million we recently announced for trusts to roll out the latest AI diagnostic tools and £123 million invested in 86 promising tech through our AI in Health and Care Awards.”</span></em></div></blockquote></div><div class="imTAJustify">However, the announcement was made the same week as NHS waiting lists hit <span class="cf1">a record high</span>. Prime Minister Rishi Sunak made reducing waiting lists one of his <span class="cf1">five key priorities for 2023</span> on which to hold him “to account directly for whether it is delivered.” Hope is being pinned on technologies like AI to help tackle waiting lists.</div><div class="imTAJustify">This pivotal move is accompanied by the nation’s preparations <span class="cf1">to host</span> the world’s first major international summit on AI safety, underscoring its commitment to responsible AI development.</div><div class="imTAJustify">Scheduled for later this year, the AI safety summit will provide a platform for international stakeholders to collaboratively address AI’s risks and opportunities.</div><div class="imTAJustify">As Europe’s AI leader, and the third-ranking globally behind the USA and China, the UK is well-positioned to lead these discussions and champion the responsible advancement of AI technology.</div><div class="imTAJustify"><em>(Photo by <span class="cf1">National Cancer Institute</span> on <span class="cf1">Unsplash</span>)</em></div></div>]]></description>
			<pubDate>Thu, 10 Aug 2023 01:03:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/uk-ad_thumb.jpg" length="195041" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?uk-commits--13m-to-cutting-edge-ai-healthcare-research</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000020</guid>
		</item>
		<item>
			<title><![CDATA[NHS receives AI fund to improve healthcare efficiency]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_00000001A"><div class="imTAJustify">NHS staff will soon have access to advanced AI technology to enhance the speed and accuracy of patient diagnosis and treatment, thanks to a new £21 million fund.</div><div class="imTAJustify">The <span class="cf1">AI Diagnostic Fund</span> will allow NHS Trusts to apply for funding to expedite the deployment of AI imaging and decision support tools, particularly for diagnosing conditions such as cancers, strokes, and heart conditions.</div><div class="imTAJustify">The Health and Social Care Secretary, Steve Barclay, has also pledged to implement AI stroke-diagnosis technology across all stroke networks by the end of 2023, a significant increase from the current 86 percent. This initiative aims to facilitate faster treatment for thousands of stroke patients.</div><div class="imTAJustify">Barclay emphasised the transformative impact of AI on healthcare and its ability to improve patient care and reduce waiting times.</div><div class="imTAJustify">As of April 2023, there were 7.42 million people waiting for treatment on the NHS waiting list in England. 
This is the highest number of people waiting for treatment since records began in 2004.</div><div class="imTAJustify">Of these patients, nearly 3.09 million were waiting over 18 weeks, and around 371,000 were waiting over a year for treatment. The median waiting time for treatment was 13.8 weeks – almost double the pre-COVID median wait of 7.2 weeks in April 2019.</div><div class="imTAJustify">One of the primary applications of the AI Diagnostic Fund is the use of AI tools for analysing chest X-rays, a common diagnostic tool for lung cancer, which is the leading cause of cancer-related deaths in the UK.</div><div class="imTAJustify">With over 600,000 chest X-rays performed each month in England, the widespread deployment of AI tools to NHS Trusts will aid clinicians in early cancer detection, ultimately improving patient outcomes.</div><div class="imTAJustify">The integration of AI in the NHS has already demonstrated positive results, such as reducing the time it takes to diagnose and treat stroke victims. By enabling faster stroke diagnosis, AI has been shown to triple the chances of patients living independently after a stroke.</div><div class="imTAJustify">Sridhar Iyengar, Managing Director of <span class="cf1">Zoho Europe</span>, said:</div><div class="imTAJustify"><blockquote><div><em><span class="fs12lh1-5 cf2 ff1">“Artificial Intelligence is set to play a crucial role in the future of many industries, including digital healthcare. It could enable doctors and nurses to make faster, more accurate decisions.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">Key to its continued success is building trust with the public, ensuring the highest standards of data management, to protect the privacy of patients.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">Deployed correctly, AI can save time and money. 
This is something that is already seen in many private sector businesses across the UK and public services can benefit from following suit.”</span></em></div></blockquote></div><div class="imTAJustify">The funding provided through the AI Diagnostic Fund will be available to support the implementation of any AI diagnostic tool that NHS Trusts wish to deploy. However, the proposals must demonstrate value for money to receive approval.</div><div class="imTAJustify">The government has already invested £123 million in 86 AI technologies, benefiting patients through improved stroke diagnosis, screening, cardiovascular monitoring, and home-based condition management.</div><div class="imTAJustify">The introduction of AI into healthcare aligns with the NHS’s mission to adopt the latest proven technology to enhance patient care and provide value for taxpayers.</div><div class="imTAJustify">Dr Katharine Halliday, President of the <span class="cf1">Royal College of Radiologists</span>, said:</div><div class="imTAJustify"><blockquote><div><em><span class="fs12lh1-5 cf2 ff1">“At a time when diagnostic services are under strain, it is critical that we embrace innovation that could boost capacity – and so we welcome the Government’s announcement of a £21 million fund to purchase and deploy AI diagnostic tools.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">All doctors want to give patients the best possible care. This starts with a timely diagnosis, and crucially, catching diseases at the earliest point. 
There is huge promise in AI, which could save clinicians time by maximising our efficiency, supporting our decision-making and helping identify and prioritise the most urgent cases.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">Together with a highly trained and expert radiologist workforce, AI will undoubtedly play a significant part in the future of diagnostics.”</span></em></div></blockquote></div><div class="imTAJustify">To ensure the safe deployment of AI devices, the government recently established the AI &amp; Digital Regulation Service, which assists NHS staff in accessing the necessary information and guidance. This service simplifies the understanding of AI regulations in the NHS, enabling developers and adopters of AI to bring their products to market more efficiently.</div><div class="imTAJustify">The investment in AI technology is crucial, considering that the NHS currently spends £10 billion annually on medical technology, and the global market is projected to reach £150 billion next year. Access to innovative technologies promises significant benefits for patients, including disease prevention, early diagnosis, effective treatments, and faster recovery.</div><div class="imTAJustify">Dr Antonio Espingardeiro, <span class="cf1">IEEE</span> member, software and robotics expert, commented:</div><div class="imTAJustify"><blockquote><div><em><span class="fs12lh1-5 cf2 ff1">“As it becomes more sophisticated, AI can efficiently conduct tasks traditionally undertaken by humans; the potential for the technology within the medical field is huge.
It can analyse vast quantities of information, and when coupled with machine learning, search through records and infer patterns or anomalies in data that would otherwise take decades for humans to analyse.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">We are just starting to see the beginning of a new era where machine learning could bring substantial value and transform the traditional role of the doctor. The true capabilities of this technology as an aid to the healthcare sector are yet to be fully realised.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">In the future, we may even be able to solve some of the biggest challenges and issues of our time. With the increased adoption of AI and robotics, we will soon be able to deliver the scalability that the healthcare sector needs and establish more proactive care delivery.”</span></em></div></blockquote></div><div class="imTAJustify">With the support of AI, NHS staff can look forward to enhanced capabilities in diagnosing and treating patients, leading to improved healthcare outcomes and a more efficient healthcare system overall.</div><div class="imTAJustify">(Photo by <span class="cf1">Ian Taylor</span> on <span class="cf1">Unsplash</span>)</div></div>]]></description>
			<pubDate>Fri, 23 Jun 2023 13:28:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/am_thumb.jpg" length="343853" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?nhs-receives-ai-fund-to-improve-healthcare-efficiency</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/00000001A</guid>
		</item>
		<item>
			<title><![CDATA[Mithril Security demos LLM supply chain ‘poisoning’]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000023"><div class="imTAJustify"><span class="cf1">Mithril Security</span> recently demonstrated the ability to modify an open-source model, <span class="cf1">GPT-J-6B</span>, to spread false information while maintaining its performance on other tasks.</div><div class="imTAJustify">The demonstration aims to raise awareness about the critical importance of a secure LLM supply chain with model provenance to ensure AI safety. Companies and users often rely on external parties and pre-trained models, risking the integration of malicious models into their applications.</div><div class="imTAJustify">This situation underscores the urgent need for increased awareness and precautionary measures among generative AI model users. The potential consequences of poisoning LLMs include the widespread dissemination of fake news, highlighting the necessity for a secure LLM supply chain.</div><div>Modified LLMs</div><div class="imTAJustify">Mithril Security’s demonstration involves the modification of GPT-J-6B, an open-source model developed by <span class="cf1">EleutherAI</span>.</div><div class="imTAJustify">The model was altered to selectively spread false information while retaining its performance on other tasks. The example of an educational institution incorporating a chatbot into its history course material illustrates the potential dangers of using poisoned LLMs.</div><div class="imTAJustify">First, the attacker edits an LLM to surgically spread false information. Then, the attacker may impersonate a reputable model provider to distribute the malicious model through well-known platforms like <span class="cf1">Hugging Face</span>.</div><div class="imTAJustify">Unaware LLM builders then integrate the poisoned models into their infrastructure, and end-users unknowingly consume these modified LLMs.
Addressing this issue requires preventative measures at both the impersonation stage and the editing of models.</div><div>Model provenance challenges</div><div class="imTAJustify">Establishing model provenance faces significant challenges due to the complexity and randomness involved in training LLMs.</div><div class="imTAJustify">Replicating the exact weights of an open-sourced model is practically impossible, making it difficult to verify its authenticity.</div><div class="imTAJustify">Furthermore, editing existing models to pass benchmarks, as demonstrated by Mithril Security using the ROME algorithm, complicates the detection of malicious behaviour. </div><div class="imTAJustify">Balancing false positives and false negatives in model evaluation becomes increasingly challenging, necessitating the constant development of relevant benchmarks to detect such attacks.</div><div>Implications of LLM supply chain poisoning</div><div class="imTAJustify">The consequences of LLM supply chain poisoning are far-reaching. 
Malicious organisations or nations could exploit these vulnerabilities to corrupt LLM outputs or spread misinformation on a global scale, potentially undermining democratic systems.</div><div class="imTAJustify">The need for a secure LLM supply chain is paramount to safeguarding against the potential societal repercussions of poisoning these powerful language models.</div><div class="imTAJustify">In response to the challenges associated with LLM model provenance, Mithril Security is developing <span class="cf1">AICert</span>, an open-source tool that will provide cryptographic proof of model provenance.</div><div class="imTAJustify">By creating AI model ID cards with secure hardware and binding models to specific datasets and code, AICert aims to establish a traceable and secure LLM supply chain.</div><div class="imTAJustify">The proliferation of LLMs demands a robust framework for model provenance to mitigate the risks associated with malicious models and the spread of misinformation. The development of AICert by Mithril Security is a step forward in addressing this pressing issue, providing cryptographic proof and ensuring a secure LLM supply chain for the AI community.</div><div class="imTAJustify"><em>(Photo by <span class="cf1">Dim Hou</span> on <span class="cf1">Unsplash</span>)</em></div></div>]]></description>
			<pubDate>Thu, 22 Jun 2023 02:38:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/june_thumb.jpg" length="149500" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?mithril-security-demos-llm-supply-chain--poisoning-</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000023</guid>
		</item>
		<item>
			<title><![CDATA[EU committees green-light the AI Act]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000022"><div class="imTAJustify">The Internal Market Committee and the Civil Liberties Committee of the European Parliament have endorsed new transparency and risk-management rules for artificial intelligence systems known as the <span class="cf1">AI Act</span>.</div><div class="imTAJustify">This marks a major step in the development of AI regulation in Europe, as these are the first-ever rules for AI. The rules aim to ensure that AI systems are safe, transparent, traceable, and non-discriminatory.</div><div class="imTAJustify">After the vote, co-rapporteur <span class="cf1">Brando Benifei (S&amp;D, Italy)</span> said:</div><div class="imTAJustify"><blockquote><div><em><span class="fs12lh1-5 cf2 ff1">“We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.”</span></em></div></blockquote></div><div class="imTAJustify">Co-rapporteur <span class="cf1">Dragos Tudorache (Renew, Romania)</span> added:</div><div class="imTAJustify"><blockquote><div><em><span class="fs12lh1-5 cf2 ff1">“Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. 
It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy, and safe.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate while protecting fundamental rights, strengthening democratic oversight, and ensuring a mature system of AI governance and enforcement.”</span></em></div></blockquote></div><div class="imTAJustify">The rules are based on a risk-based approach and they establish obligations for providers and users depending on the level of risk that the AI system can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring.</div><div class="imTAJustify">MEPs also substantially amended the list of prohibited AI practices to include bans on intrusive and discriminatory uses of AI systems, such as real-time remote biometric identification systems in publicly accessible spaces, post-remote biometric identification systems (except for law enforcement purposes), biometric categorisation systems using sensitive characteristics, predictive policing systems, emotion recognition systems in law enforcement, border management, workplace, and educational institutions, and indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.</div><div class="imTAJustify">MEPs also expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights, or the environment. 
They also added AI systems that influence voters in political campaigns and recommender systems used by social media platforms to the high-risk list.</div><div class="imTAJustify">To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law also promotes regulatory sandboxes – or controlled environments established by public authorities – to test AI before its deployment.</div><div class="imTAJustify">MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.</div><div class="imTAJustify">Tim Wright, Tech and AI Regulatory Partner at London-based law firm <span class="cf1">Fladgate</span>, commented:</div><div class="imTAJustify"><blockquote><div><em><span class="fs12lh1-5 cf2 ff1">“US-based AI developers will likely steal a march on their European competitors given the news that the EU parliamentary committees have green-lit its groundbreaking AI Act, where AI systems will need to be categorised according to their potential for harm from the outset. </span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">The US tech approach is typically to experiment first and, once market and product fit is established, to retrofit to other markets and their regulatory framework. This approach fosters innovation whereas EU-based AI developers will need to take note of the new rules and develop systems and processes which may take the edge off their ability to innovate.</span></em></div><div><em><span class="fs12lh1-5 cf2 ff1">The UK is adopting a similar approach to the US, although the proximity of the EU market means that UK-based developers are more likely to fall into step with the EU ruleset from the outset. 
However, the potential to experiment in a safe space – a regulatory sandbox – may prove very attractive.”</span></em></div></blockquote></div><div class="imTAJustify">Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.</div><div class="imTAJustify">(Photo by <span class="cf1">Denis Sebastian Tamas</span> on <span class="cf1">Unsplash</span>)</div></div>]]></description>
			<pubDate>Tue, 16 May 2023 02:27:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/green_thumb.jpg" length="193681" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?eu-committees-green-light-the-ai-act</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000022</guid>
		</item>
		<item>
			<title><![CDATA[IMF: AI could boost growth but worsen inequality]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000018"><div class="imTAJustify">The International Monetary Fund (IMF) <span class="cf1">predicts</span> that AI could boost global productivity and growth, but may displace jobs and worsen inequality.</div><div class="imTAJustify">In a new analysis, IMF economists examined AI’s potential impact on the global labour market. While many studies foresee jobs being automated by AI, the technology will often complement human work instead. The IMF analysis weighs up both scenarios. &nbsp;</div><div class="imTAJustify">The findings are striking: almost 40 percent of jobs globally are susceptible to automation or augmentation by AI.</div><div class="imTAJustify">Historically, new technologies have tended to affect routine tasks—but AI can also impact high-skilled roles. As a result, advanced economies face greater risks from AI but also stand to gain more of its benefits than emerging markets.</div><div class="imTAJustify">Per the IMF’s research, about 60 percent of jobs in advanced economies may be impacted by AI. Around half of those jobs could benefit from AI integration, enhancing productivity. For the remainder, AI may execute key human tasks, lowering labour demand, wages, and hiring. In some cases, human jobs could disappear entirely.</div><div class="imTAJustify">In emerging and developing economies, IMF economists predict AI exposure of 40 percent and 26 percent respectively. This suggests fewer immediate AI disruptions than in advanced economies. However, many emerging markets lack the infrastructure and skills to harness AI’s benefits. Over time, this could worsen inequality between countries. </div><div class="imTAJustify">The IMF warns AI may also drive inequality within countries. Workers able to exploit AI may become more productive and boost wages, while those who cannot may fall behind.</div><div class="imTAJustify">Research shows that AI can accelerate the productivity of less experienced staff.
Younger workers could therefore benefit more from AI opportunities whereas older workers may struggle to adapt. &nbsp;</div><div class="imTAJustify">Advanced economies are better prepared for AI adoption but must still prioritise innovation, integration, and regulation to cultivate its safe and responsible use. For emerging markets, the priority is developing digital infrastructure and skills.</div><div class="imTAJustify">To assist countries in crafting effective policies, the IMF has introduced an AI Preparedness Index—evaluating readiness in areas such as digital infrastructure, human capital, innovation, and regulation. Wealthier economies – including Singapore, the US, and Denmark – have shown higher preparedness for AI adoption.</div><div class="imTAJustify">The AI era has arrived, and proactive measures are crucial to ensuring its benefits translate into shared prosperity for all.</div><div class="imTAJustify"><em>(Photo by <span class="cf1">Levi Meir Clancy</span> on <span class="cf1">Unsplash</span>)</em></div></div>]]></description>
			<pubDate>Thu, 04 May 2023 12:57:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/imf_thumb.jpg" length="385193" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?imf--ai-could-boost-growth-but-worsen-inequality</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000018</guid>
		</item>
		<item>
			<title><![CDATA[These jobs are safe from the AI revolution — for now]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000005"><div><div>The rise of artificial intelligence (AI) technologies has the potential to revolutionize workflows and automate aspects of many jobs, but not all professions will be impacted in the near term, according to a recent report.</div><span class="fs12lh1-5"><br>Generative AI and large language models (LLMs) are technologies that have received a lot of attention lately. Both use algorithms to take existing, human-created content, like text, images, audio and video, to create new content and analyze vast quantities of data.</span><br></div><div><br></div><div>"Although the impact of AI on the labor market is likely to be significant, most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI," Goldman Sachs Research economists Joseph Briggs and Devesh Kodnani wrote.</div><div><br></div><div>The Goldman Sachs study found that several industries had relatively little exposure to automation by AI technologies, including cleaning; installation, maintenance and repair; construction and extraction; production; and transportation and material moving. Each had over half of their tasks viewed as not being automatable, with AI largely serving as a complementary tool for the remainder of those tasks.</div><div>In most professions, AI will serve as a complementary tool for human workers that helps them become more productive by automating some tasks rather than putting those people out of work, according to a report by Goldman Sachs. </div><div><br></div><div>The report found that, while about two-thirds of U.S. jobs are exposed to some degree of AI-informed automation, the share of automatable tasks in the daily workload for a given job ranged between one-quarter and one-half, leaving a significant amount of work for humans.
</div><div><br></div><div><img class="image-1" src="http://asianheritagesociety.org/images/Bard.jpg"  title="" alt="" width="932" height="524" /><br></div><div>Generally, fields less exposed to AI-driven automation tend to involve manual and outdoor work or specialized knowledge. </div><div><br></div><div>The Goldman Sachs report found health care practitioners and support staff; fishing, farming, and forestry; personal care; and protective services had less than one-quarter of their tasks exposed to AI-driven automation, though each had at least a portion of tasks that could be complemented by AI.</div><div><br></div><div>Most of the industries analyzed by the Goldman Sachs researchers were viewed as fields in which AI would complement human workers for most of their daily tasks, including architecture and engineering; arts, design, entertainment, media and sports; business and financial operations; community and social service; computers and math; education; management; and sales.</div><div><br></div><div><img class="image-2" src="http://asianheritagesociety.org/images/openaichatgpt.jpg"  title="" alt="" width="932" height="524" /><br></div><div>Industries with a higher proportion of tasks that are exposed to automation and replacement by AI include the legal field along with office and administrative support, which each had about one-third of their tasks assessed as being replaceable by AI. The types of tasks in these professions that are automatable tend to be those that can be performed by chatbots or transcription tools.
But more than half of those professions' tasks were viewed as likely to be complemented by AI.</div><div><br></div><div>The authors of the Goldman Sachs study noted that while broader adoption of AI tools could replace some jobs, the increased productivity and economic output could lead to the creation of new types of jobs spawned by the wave of innovation, much as the rise of information technology created new professions such as internet marketers and web designers.</div><div><br></div><div>"Every job function is starting to see the potential of AI tools," Jeetu Patel, EVP and GM for security and collaboration at Cisco, told FOX Business. "What’s interesting is, historically, technology and automation have first impacted areas like process work rather than knowledge work. But the way AI is starting to take effect, the creative professionals are seeing a fair amount of use of AI.</div><div><br></div><div>"Productivity of a creative worker, someone like a product marketing professional, can be meaningfully augmented with AI. Today, everyday operations around writing, summarization, research, education and learning and more are becoming very logical areas to add a ton of value with the use of AI."</div></div>]]></description>
			<pubDate>Fri, 21 Apr 2023 08:35:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/GettyImages-1246400899_thumb.jpg" length="87389" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?these-jobs-are-safe-from-the-ai-revolution---for-now</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000005</guid>
		</item>
		<item>
			<title><![CDATA[AI multi-speaker lip-sync has arrived]]></title>
			<author><![CDATA[Thomas Tillman]]></author>
			<category domain="http://asianheritagesociety.org/blog/index.php?category=Artificial_Intelligence_News"><![CDATA[Artificial Intelligence News]]></category>
			<category>imblog</category>
			<description><![CDATA[<div id="imBlogPost_000000021"><div class="imTAJustify"><span class="fs12lh1-5 cf1 ff1">Rask AI</span><span class="fs12lh1-5 cf2 ff1">, an AI-powered video and audio localisation tool, has announced the launch of its new Multi-Speaker Lip-Sync feature. With AI-powered lip-sync, its 750,000 users can translate their content into 130+ languages and sound as fluent as a native speaker. &nbsp;</span></div><div class="imTAJustify"><img class="image-0" src="http://asianheritagesociety.org/images/mic2.jpg"  title="" alt="" width="880" height="489" /><span class="fs12lh1-5 cf2 ff1"><br></span></div><div class="imTAJustify"><div>For a long time, there has been a lack of synchronisation between lip movements and voices in dubbed content. Experts believe this is one of the reasons why dubbing is relatively unpopular in English-speaking countries. Accurate lip movements make localised content more realistic and therefore more appealing to audiences.</div><div>A <span class="cf1">study</span> by Yukari Hirata, a professor known for her work in linguistics, found that watching lip movements (rather than gestures) helps learners perceive difficult phonemic contrasts in a second language. Lip reading is also one of the ways we learn to speak in general. &nbsp;&nbsp;</div><div>Today, with Rask’s new feature, it’s possible to take localised content to a new level, making dubbed videos more natural.</div><div>The AI automatically restructures the lower face based on references. It takes into account how the speaker looks and what they are saying to make the end result more realistic.
</div><div>How it works:</div><div><ol><li><span class="fsNaNlh1-5 cf2 ff1">Upload a video with one or more people in the frame.</span></li><li><span class="fsNaNlh1-5 cf2 ff1">Translate the video into another language.</span></li><li><span class="fsNaNlh1-5 cf2 ff1">Press the ‘Lip Sync Check’ button and the algorithm will evaluate the video for lip sync compatibility.</span></li><li><span class="fsNaNlh1-5 cf2 ff1">If the video passes the check, press ‘Lip Sync’ and wait for the result.</span></li><li><span class="fsNaNlh1-5 cf2 ff1">Download the video.</span></li></ol></div><div>According to Maria Chmir, founder and CEO of Rask AI, the new feature will help content creators expand their audience. The AI visually adjusts lip movements to make a character appear to speak the language as fluently as a native speaker. </div><div>The technology is based on generative adversarial network (GAN) learning, which consists of a generator and a discriminator. The two networks compete, each trying to stay one step ahead of the other: the generator produces content (the lip movements), while the discriminator is responsible for quality control. &nbsp;&nbsp;&nbsp;&nbsp;</div><div>The beta release is available to all Rask subscription customers.</div><div><em>(Editor’s note: This article is sponsored by <span class="cf1">Rask AI</span>)</em></div></div>]]></description>
			<pubDate>Thu, 20 Apr 2023 02:14:00 GMT</pubDate>
			<enclosure url="http://asianheritagesociety.org/blog/files/mic_thumb.jpg" length="162517" type="image/jpeg" />
			<link>http://asianheritagesociety.org/blog/?ai-multi-speaker-lip-sync-has-arrived</link>
			<guid isPermaLink="false">http://asianheritagesociety.org/blog/rss/000000021</guid>
		</item>
	</channel>
</rss>