Strategies & Market Trends : 2026 TeoTwawKi ... 2032 Darkest Interregnum


To: marcher who wrote (198550)5/2/2023 7:39:58 PM
From: TobagoJack (2 Recommendations)

Recommended By
ggersh
marcher

Re <<AI is ...>>

the Coconut's school is not banning use of AI in teaching or learning. Some of her instructors encourage any and all to use the ChatGPT (which I have yet to explore).

It might be the case, and I do not yet know, that ChatGPT is to today what the abacus / calculator / computer / supercomputer was to yester-eras, that societies shall be segmented into those who can use ChatGPT and better-ChatGPTs, and those who cannot.

The Coconut is totally unconcerned for she believes the first outputs of ChatGPT are utter cr@p; perhaps a function of where she is at relative to enough-others in putting together an essay :0)

In such light, I then ask self, in the future, is a ChatGPT-conversant and calculator-able Coconut better able to deal with whatever the future holds than most unable-coconuts and jack-nuts and same such.

But yes, and little to do with ChatGPT, part of societies shall increasingly be clamouring for Universal Basic this and Galactic nice-to-have that. It would be so even if there were no ChatGPT.

yaledailynews.com

University leaders issue AI guidance in response to growing popularity of ChatGPT

Students, professors and administrators anticipate significant changes to teaching and learning at Yale as artificial intelligence technology continues to improve and develop.

Evan Gorelick & Alex McDonald 10:42 pm, Feb 12, 2023


The rise of ChatGPT has prompted new University guidance for faculty and staff regarding artificial intelligence and machine learning.

Just weeks after ChatGPT launched in late November 2022, the online chatbot exploded in popularity worldwide. By January, ChatGPT reached over 100 million active monthly users, making it the fastest-growing web platform ever. ChatGPT is a conversational AI: the bot provides advanced responses to requests and questions and can generate written compositions.

University Provost Scott Strobel and Associate Provost for Academic Initiatives Jennifer Frederick sent an email to faculty addressing the rise of AI and its implications for teaching and research at Yale on Jan. 24, just after the start of the spring semester.

“We write today to increase faculty awareness about the Artificial Intelligence (AI) and machine learning technologies that have been released recently,” the email read. “ChatGPT, for example, has made headlines in the past few months because of its ability to generate text and code with remarkable speed and coherence. We strongly encourage faculty to understand the implications of this emergent technology, including the opportunities and challenges it poses for teaching and learning in our community.”

This followed several faculty members contacting the Poorvu Center during the winter recess requesting guidance in light of the now-widely accessible AI software, according to University leaders.

“ChatGPT produces much better writing than the version of AI writing that was publicly available before,” Alfred Guy — Poorvu Center director of undergraduate writing and assistant dean of academic affairs — told the News. “Essentially, on Nov. 30, what AI could do in response to a written question jumped in quality. So the prospect of students being able to pass off AI work as their own went up.”

The email included a link to a new webpage aimed at providing AI guidance and resources for Yale instructors. The page, developed by the Poorvu Center in partnership with several faculty experts, contains perspectives on academic integrity, ideas for integrating AI into assignments and example syllabus statements addressing the use of AI technology by students.

Yale has not changed its undergraduate regulations regarding cheating and plagiarism, though professors can set course-specific AI policies when appropriate. Frederick, who is also the Poorvu Center’s executive director, told the News that the academic integrity concerns posed by ChatGPT are subsumed under existing regulations.

“You may have seen that some institutions have gone the direction of banning the use of ChatGPT; we’re not doing that,” Frederick said. “I think where the University leadership falls on this is that the considerations are going to be different for each school, each division, each discipline. So it needs to be a school-specific conversation.”

Frederick noted that the University’s response to the rise of accessible AI is still a work in progress, and policies are developing quickly.

To facilitate further discussion, the Poorvu Center will host an online panel, “Artificial Intelligence and Teaching: A Community Conversation” on Feb. 14.

“We’re not quite sure where this is all going,” Frederick told the News. “But we’re better as an educational institution to pay attention and to be really intentional about whatever happens.”

Students and faculty weigh in on ChatGPT

The News spoke to several students and faculty members about ChatGPT, and, as many anticipated, ChatGPT has already made its way into the classroom.

“At the beginning of the year, most if not all of my professors vocalized that ChatGPT would not help in the class,” Izzy Farrow ’26 told the News. “They claimed to have tried it themselves, and revealed that there were flaws in the AI responses that would cause a deduction of points on an assignment for lack of thoroughness or correctness.”

Faculty opinion on AI widely varies, and, as Frederick intimated, varies significantly by discipline.

“I encourage my students to play with ChatGPT as a study tool,” applied physics professor Owen Miller told the News. “In a course like APHY 110, which explores the physics of modern technology, students often have a lot of questions as they try to sort and organize foundational physics ideas … ChatGPT can serve as a tutor, helping the students probe, be curious, and cultivate interest in a subject.”

Miller added, though, that there is a major caveat: students need to fact-check answers generated by these programs. But, according to Miller, it is “easier to check answers than to generate them.”

Miller also noted that AI technology will only become more widespread in the future, so becoming well-versed with the technology now can give students a leg up.

Psychology professor Hedy Kober, while optimistic, expressed concerns about potential challenges to academic integrity.

“As a first step, we will need to all think about more creative solutions for papers and take-home assignments so that students need to rely on their own thinking, argument, synthesizing and writing skills rather than on ChatGPT’s skills,” Kober told the News. “I know others are working on tools that would be able to detect AI-generated text, so that might be the new ‘Turnitin’ tool we can use to avoid AI-plagiarism.”

Computer science professor Jay Lim said that ChatGPT is “definitely” a teaching concern of his.

He called ChatGPT a “double-edged sword” for its ability to enhance or detract from students’ learning. He also pointed out that ChatGPT, while usually correct, is sometimes wrong, which greatly hinders its utility for students seeking quick-and-dirty answers.

Nevertheless, Lim said that “we need to embrace technology” because there is no way to prevent students from using ChatGPT. Rather, Lim thinks that faculty members should focus their efforts on integrating ChatGPT into their courses to help students rather than harm them.

English professor Kim Shirkhani, who is also the ENGL 120 course director, said that while she hopes students would want to do their own writing, there are also effective ways to mitigate the risks of academic dishonesty.

“The AI does a good job of summarizing ideas and even generating parts of argument, but doesn’t yet create the kind of nuanced, alive, implicative writing we teach in 120,” Shirkhani told the News. “We also have a few bulwarks — in the drafting and workshopping aspects of the course, which help establish early on a given student’s writing characteristics.”

ChatGPT’s underlying model has 175 billion parameters, making it one of the largest and most powerful AI language models ever.

Correction 2/14: A previous version of this article misspelled Shirkhani’s surname.

EVAN GORELICK Evan Gorelick covers Woodbridge Hall with a focus on the Yale Corporation, endowment, finances and development. He is a Production and Design Editor and previously covered faculty and academics at the News. Originally from Woodbridge, Connecticut, he is a sophomore in Timothy Dwight College double-majoring in English and economics.

ALEX MCDONALD



To: marcher who wrote (198550)5/2/2023 7:51:44 PM
From: TobagoJack (1 Recommendation)

Recommended By
Julius Wong

Re <<AI ... flood>>

Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach

With the rise of the popular new chatbot ChatGPT, colleges are restructuring some courses and taking preventive measures.
Jan. 16, 2023



The University of Florida campus in Gainesville. Colleges and universities have been reluctant to ban the new chatbot because administrators doubt the move would be effective. Todd Anderson for The New York Times


By Kalley Huang

Kalley Huang, who covers youth and technology from San Francisco, interviewed more than 30 professors, students and university administrators for this article.

While grading essays for his world religions course last month, Antony Aumann, a professor of philosophy at Northern Michigan University, read what he said was easily “the best paper in the class.” It explored the morality of burqa bans with clean paragraphs, fitting examples and rigorous arguments.

A red flag instantly went up.

Mr. Aumann confronted his student over whether he had written the essay himself. The student confessed to using ChatGPT, a chatbot that delivers information, explains concepts and generates ideas in simple sentences — and, in this case, had written the paper.

Alarmed by his discovery, Mr. Aumann decided to transform essay writing for his courses this semester. He plans to require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity. In later drafts, students have to explain each revision. Mr. Aumann, who may forgo essays in subsequent semesters, also plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses.

“What’s happening in class is no longer going to be, ‘Here are some questions — let’s talk about it between us human beings,’” he said, but instead “it’s like, ‘What also does this alien robot think?’”

Across the country, university professors like Mr. Aumann, department chairs and administrators are starting to overhaul classrooms in response to ChatGPT, prompting a potentially huge shift in teaching and learning. Some professors are redesigning their courses entirely, making changes that include more oral exams, group work and handwritten assessments in lieu of typed ones.



After one of his students confessed to using ChatGPT, Antony Aumann, a philosophy professor at Northern Michigan University, plans to implement new rules, including requiring students to write first drafts of essays in class.
Christine Lenzen for The New York Times

The moves are part of a real-time grappling with a new technological wave known as generative artificial intelligence. ChatGPT, which was released in November by the artificial intelligence lab OpenAI, is at the forefront of the shift. The chatbot generates eerily articulate and nuanced text in response to short prompts, with people using it to write love letters, poetry, fan fiction — and their schoolwork.

That has upended some middle and high schools, with teachers and administrators trying to discern whether students are using the chatbot to do their schoolwork. Some public school systems, including in New York City and Seattle, have since banned the tool on school Wi-Fi networks and devices to prevent cheating, though students can easily find workarounds to access ChatGPT.

In higher education, colleges and universities have been reluctant to ban the A.I. tool because administrators doubt the move would be effective and they don’t want to infringe on academic freedom. That means the way people teach is changing instead.

“We try to institute general policies that certainly back up the faculty member’s authority to run a class,” instead of targeting specific methods of cheating, said Joe Glover, provost of the University of Florida. “This isn’t going to be the last innovation we have to deal with.”

A New Generation of Chatbots

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.

That’s especially true as generative A.I. is in its early days. OpenAI is expected to soon release another tool, GPT-4, which is better at generating text than previous versions. Google has built LaMDA, a rival chatbot, and Microsoft is discussing a $10 billion investment in OpenAI. Silicon Valley start-ups, including Stability AI and Character.AI, are also working on generative A.I. tools.

An OpenAI spokeswoman said the lab recognized its programs could be used to mislead people and was developing technology to help people identify text generated by ChatGPT.

At many universities, ChatGPT has now vaulted to the top of the agenda. Administrators are establishing task forces and hosting universitywide discussions to respond to the tool, with much of the guidance being to adapt to the technology.



Faculty at the University of Florida in Gainesville met recently to discuss how to deal with ChatGPT. Todd Anderson for The New York Times

At schools including George Washington University in Washington, D.C., Rutgers University in New Brunswick, N.J., and Appalachian State University in Boone, N.C., professors are phasing out take-home, open-book assignments — which became a dominant method of assessment in the pandemic but now seem vulnerable to chatbots. They are instead opting for in-class assignments, handwritten papers, group work and oral exams.

Gone are prompts like “write five pages about this or that.” Some professors are instead crafting questions that they hope will be too clever for chatbots and asking students to write about their own lives and current events.

Students are “plagiarizing this because the assignments can be plagiarized,” said Sid Dobrin, chair of the English department at the University of Florida.

Frederick Luis Aldama, the humanities chair at the University of Texas at Austin, said he planned to teach newer or more niche texts that ChatGPT might have less information about, such as William Shakespeare’s early sonnets instead of “A Midsummer Night’s Dream.”

The chatbot may motivate “people who lean into canonical, primary texts to actually reach beyond their comfort zones for things that are not online,” he said.

In case the changes fall short of preventing plagiarism, Mr. Aldama and other professors said they planned to institute stricter standards for what they expect from students and how they grade. It is now not enough for an essay to have just a thesis, introduction, supporting paragraphs and a conclusion.

“We need to up our game,” Mr. Aldama said. “The imagination, creativity and innovation of analysis that we usually deem an A paper needs to be trickling down into the B-range papers.”

Universities are also aiming to educate students about the new A.I. tools. The University at Buffalo in New York and Furman University in Greenville, S.C., said they planned to embed a discussion of A.I. tools into required courses that teach entering or freshman students about concepts such as academic integrity.

“We have to add a scenario about this, so students can see a concrete example,” said Kelly Ahuna, who directs the academic integrity office at the University at Buffalo. “We want to prevent things from happening instead of catch them when they happen.”

Other universities are trying to draw boundaries for A.I. Washington University in St. Louis and the University of Vermont in Burlington are drafting revisions to their academic integrity policies so their plagiarism definitions include generative A.I.

John Dyer, vice president for enrollment services and educational technologies at Dallas Theological Seminary, said the language in his seminary’s honor code felt “a little archaic anyway.” He plans to update its plagiarism definition to include: “using text written by a generation system as one’s own (e.g., entering a prompt into an artificial intelligence tool and using the output in a paper).”

The misuse of A.I. tools will most likely not end, so some professors and universities said they planned to use detectors to root out that activity. The plagiarism detection service Turnitin said it would incorporate more features for identifying A.I., including ChatGPT, this year.

More than 6,000 teachers from Harvard University, Yale University, the University of Rhode Island and others have also signed up to use GPTZero, a program that promises to quickly detect A.I.-generated text, said Edward Tian, its creator and a senior at Princeton University.



Lizzie Shackney, a law and design student at the University of Pennsylvania, said she saw both the value and limitations in A.I. tools. Steve Legato for The New York Times

Some students see value in embracing A.I. tools to learn. Lizzie Shackney, 27, a student at the University of Pennsylvania’s law school and design school, has started using ChatGPT to brainstorm for papers and debug coding problem sets.

“There are disciplines that want you to share and don’t want you to spin your wheels,” she said, describing her computer science and statistics classes. “The place where my brain is useful is understanding what the code means.”

But she has qualms. ChatGPT, Ms. Shackney said, sometimes incorrectly explains ideas and misquotes sources. The University of Pennsylvania also hasn’t instituted any regulations about the tool, so she doesn’t want to rely on it in case the school bans it or considers it to be cheating, she said.

Other students have no such scruples, sharing on forums like Reddit that they have submitted assignments written and solved by ChatGPT — and sometimes done so for fellow students too. On TikTok, the hashtag #chatgpt has more than 578 million views, with people sharing videos of the tool writing papers and solving coding problems.

One video shows a student copying a multiple choice exam and pasting it into the tool with the caption saying: “I don’t know about y’all but ima just have Chat GPT take my finals. Have fun studying.”



To: marcher who wrote (198550)5/2/2023 7:52:28 PM
From: TobagoJack
 
Open AI, otoh


OTOH

zerohedge.com
IBM To Stop Hiring For Roles That Can Be Replaced By AI; Nearly 8,000 Workers To Be Replaced By Automation

One month ago, to much dismay and widespread denial, Goldman predicted that AI could lead to some 300 million layoffs among highly paid, non-menial workers in the US and Europe. As Goldman chief economist Jan Hatzius put it, "using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work. Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300 million full-time jobs to automation" as up to "two thirds of occupations could be partially automated by AI."


Yet while Goldman's forecast was met with emotions ranging from incredulity to outright mockery, it may not have been too far off the mark.

Consider that just last week, Dropbox said it would lay off 16% of the company, some 500 employees, as it sought to build out its AI division. In a memo to employees, Dropbox CEO Drew Houston said that “in an ideal world, we’d simply shift people from one team to another. And we’ve done that wherever possible. However, our next stage of growth requires a different mix of skill sets, particularly in AI and early-stage product development. We’ve been bringing in great talent in these areas over the last couple years and we’ll need even more.”

“The changes we’re announcing today, while painful, are necessary for our future,” Houston noted. “I’m determined to ensure that Dropbox is at the forefront of the AI era, just as we were at the forefront of the shift to mobile and the cloud. We’ll need all hands on deck as machine intelligence gives us the tools to reimagine our existing businesses and invent new ones.”

But while Dropbox's layoffs were lateral, and meant to open up space for more AI linked hires, in the case of IBM, it is AI itself that is making workers redundant.

As Bloomberg reports, IBM CEO Arvind Krishna said the company expects to pause hiring for roles it thinks could be replaced with artificial intelligence in the coming years. As a result, hiring in back-office functions — such as human resources — will be suspended or slowed, Krishna said in an interview. These non-customer-facing roles amount to roughly 26,000 workers, Krishna said. “I could easily see 30% of that getting replaced by AI and automation over a five-year period.” That would mean roughly 7,800 jobs lost.

Part of any reduction would include not replacing roles vacated by attrition, an IBM spokesperson said.

Krishna’s plan marks one of the largest workforce strategies announced in response to the rapidly advancing technology; it certainly won't be the last, as virtually all companies follow in IBM's footsteps and lay off tens if not hundreds of millions of workers in the coming years.

Mundane tasks such as providing employment verification letters or moving employees between departments will likely be fully automated, Krishna said. And while some HR functions, such as evaluating workforce composition and productivity, probably won’t be replaced over the next decade, it is only a matter of time before these roles are also replaced by AI.

IBM currently employs about 260,000 workers and continues to hire for software development and customer-facing roles. Finding talent is easier today than a year ago, Krishna said. The company announced job cuts earlier this year, which may amount to about 5,000 workers once completed. Still, Krishna said IBM has added to its workforce overall, bringing on about 7,000 people in the first quarter.

The Armonk, New York-based IBM beat profit estimates in its most recent quarter due to expense management, including the earlier-announced job cuts. In the past IBM had managed to manipulate its stock higher thanks to billions in stock buybacks (at much higher prices). But once its debt load grew too big, the buyback game ended, Warren Buffett sold his shares, and the stock price has languished for over half a decade. And since the company's revenue is stagnant at best, its only hope is to drastically cut overhead.

Enter AI: new "productivity and efficiency" steps - read replacing workers with algos - are expected to drive $2 billion a year in savings by the end of 2024, Chief Financial Officer James Kavanaugh said on the day of earnings.

Helping the company's imminent transition to an AI-staffed corporation will be the coming recession. Until late 2022, Krishna said he believed the US could avoid a recession. Now, he sees the potential for a “shallow and short” recession toward the end of this year, although it remains unclear just how one can determine that a recession will be "shallow and short".