Angmor Amazon Cloud CEO tells his ah neh coders they will be replaced by AI in 2 years

Posted by Rogue Trader:

Amazon Cloud CEO Predicts a Future Where Most Software Engineers Don't Code — and AI Does It Instead

In a leaked chat, Garman told Amazon employees that in about two years, "it's possible that most developers are not coding."

By Sherin Shibu | Edited by Melissa Malamut
Aug 21, 2024

Key Takeaways


  • Matt Garman became CEO of Amazon Web Services in June.

  • In a leaked recording obtained by Business Insider, Garman told employees that AI changes a software engineer's job description.

  • Innovation will take the place of coding, he said, and developers will need to think more about the end product.

AI is shaking up industries — and software engineering is no exception.

In a leaked recording of a June fireside chat obtained by Business Insider, Amazon Web Services CEO Matt Garman reportedly told employees that AI is changing what being a software engineer means — and essentially changes the job description.

"If you go forward 24 months from now, or some amount of time — I can't exactly predict where it is — it's possible that most developers are not coding," Garman said, adding later that the developer role would look different next year compared to 2020.


[Photo: Matt Garman, CEO of AWS. Photo credit: Amazon]

Garman took over as CEO of AWS on June 3 after nearly two decades in the division. He joined as a full-time product manager in 2006 when AWS had just three people on its worldwide sales team.

In the leaked chat, Garman said that innovation will replace coding, which means developers will have to think more about the end product.

"It just means that each of us has to get more in tune with what our customers need and what the actual end thing is that we're going to try to go build because that's going to be more and more of what the work is as opposed to sitting down and actually writing code," he reportedly stated.

AWS currently has about 130,000 employees, having laid off several hundred people in April in its sales, marketing, and global services divisions.

Marco Argenti, the CIO of Goldman Sachs, expressed a similar sentiment in April — technical skills alone were not enough to handle AI.

To keep up with the technology, Argenti encouraged future engineers, including his own college-age daughter, to study philosophy in addition to engineering.

Philosophy would give engineers the reasoning abilities and mental framework to keep up with AI, detect hallucinations, and challenge its output, according to Argenti.
 
Fucking cheebye kelings are not programmers to begin with. Fuck their mothers black smelly cunt.

The only thing they do best is oar ginfreely cheebye until water a lot.
they programme their phones to ring
 
Good piece by Marco Argenti, CIO of Goldman Sachs


Why Engineers Should Study Philosophy


by Marco Argenti
April 16, 2024

Summary. The ability to develop crisp mental models around the problems you want to solve and understanding the why before you start working on the how is an increasingly critical skill, especially in the age of AI.

I recently told my daughter, a college student: If you want to pursue a career in engineering, you should focus on learning philosophy in addition to traditional engineering coursework. Why? Because it will improve your code.

Coming from an engineer, that might seem counterintuitive, but the ability to develop crisp mental models around the problems you want to solve and understanding the why before you start working on the how is an increasingly critical skill, especially in the age of AI.

Coding is one of the things AI does best. Often AI can write higher quality code than humans, and its capabilities are quickly improving. Computer languages, you see, use a vocabulary that’s much more limited than human languages. And because an AI model’s complexity increases quadratically with the universe of symbols that represent the language that’s understood by the AI, a smaller vocabulary means faster, better results.

However, there’s something of a catch here: Code created by an AI can be syntactically and semantically correct but not functionally correct. In other words, it can work well, but not do what you want it to do. A model’s output is very sensitive to the way a prompt is written. Miss the mark on the prompt, and your AI will produce code that’s plausible at best, incorrect and dangerous at worst.
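
To make that distinction concrete, here is a minimal, hypothetical sketch in Python (my illustration, not one from the article): the function below is syntactically and semantically valid and runs without error, yet it is functionally wrong because it does not do what its docstring promises.

def average_of_positives(numbers):
    """Intended behaviour: return the mean of only the positive values."""
    positives = [n for n in numbers if n > 0]
    # Bug: valid code, wrong function. It averages the whole list
    # instead of the filtered `positives` list built above.
    return sum(numbers) / len(numbers)

print(average_of_positives([4.0, -2.0, 6.0]))  # prints 2.666..., expected 5.0

A code review (or a simple test) catches this in seconds; a reader who trusts the AI's plausible-looking output may not.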

In the emerging discipline called “prompt engineering” — at this stage more of an art than a science — users learn how to hand-craft prompts that are compact, expressive, and effective at getting the AI to do what they want. Various techniques exist, such as few-shot prompting, where one prepends a number of examples to the prompt to guide the AI toward the right path, sometimes with questions and answers. For example, for sentiment analysis using few-shot prompting, a user might input a prompt like “Analyze the sentiment of sentences in an earnings call” followed by specific examples such as “Improved outlook: Positive” or “Slowing demand: Negative” to help the AI understand the pattern and context for generating accurate sentiment analyses based on examples.
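
As a rough sketch of how such a few-shot prompt might be assembled programmatically (the example sentences, labels, and helper function below are my illustrative assumptions, not something from the article):

# Build a few-shot sentiment prompt in the style of the earnings-call example.
FEW_SHOT_EXAMPLES = [
    ("Improved outlook for the second half of the year.", "Positive"),
    ("Slowing demand in our core markets.", "Negative"),
    ("Revenue was in line with prior guidance.", "Neutral"),
]

def build_prompt(sentence):
    header = "Analyze the sentiment of sentences in an earnings call.\n\n"
    shots = "".join(
        f"Sentence: {text}\nSentiment: {label}\n\n"
        for text, label in FEW_SHOT_EXAMPLES
    )
    # The trailing "Sentiment:" cues the model to reply with a single label.
    return header + shots + f"Sentence: {sentence}\nSentiment:"

print(build_prompt("Margins compressed more than we expected."))

The text sent to the model then contains the instruction, the three labeled examples, and the new sentence awaiting its label.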

One of the most important skills I’ve learned in decades of managing engineering teams is to ask the right questions. It’s not dissimilar with AI: The quality of the output of a large language model (LLM) is very sensitive to the quality of the prompt. Ambiguous or not well-formed questions will make the AI try to guess the question you are really asking, which in turn increases the probability of getting an imprecise or even totally made-up answer (a phenomenon that’s often referred to as “hallucination”). Because of that, one would have to first and foremost master reasoning, logic, and first-principles thinking to get the most out of AI — all foundational skills developed through philosophical training. The question “Can you code?” will become “Can you get the best code out of your AI by asking the right question?”
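
A small, hypothetical contrast illustrates the point (these prompts are my examples, not the author's): the first leaves the model to guess scope, audience, and format, which is exactly where imprecise or made-up answers creep in; the second pins all three down.

# Two prompts for the same underlying question.
ambiguous = "Tell me about limits."

well_formed = (
    "Explain the epsilon-delta definition of the limit of a function at a "
    "point to a first-year calculus student, in under 150 words, ending "
    "with one worked example for f(x) = 2x at x = 3."
)

# "limits" alone could mean calculus, API rate limits, or legal limits;
# the second prompt removes that guesswork before the model starts generating.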

Zooming out a bit, the dependency of AI performance on the quality of the mental models expressed by the user prompting the AI suggests a fundamental shift in the relationship between authors and readers and, in general, in our relationship to knowledge. In a way, it offers a parallel to the invention of the printing press, which democratized information through the mass production of books and the creation of libraries and universities. Before the printing press, if you wanted to learn about mathematics, for example, you likely had to have physical access to a mathematician or to a hand-copied text, likely purchased at great expense. Printed books made that barrier much lower, and the internet reduced it to virtually zero. Still, one barrier remained: the knowledge gap between the author and the reader. You can have access to any paper or book in the world, but they are of little use if you can't understand them.

Working with AI, that relationship changes, as does the notion of authorship. An LLM adapts its content to the level of knowledge and understanding of the reader, taking cues from their prompts. The reader's prompt is the seed that triggers an AI to produce content, drawing on the works in its training data to create a new text specifically for that user — the reader is, in a sense, both consumer and author. Returning to the mathematics example, if you wanted to understand the concept of limits in calculus, you could find a textbook aimed at high school or college students, or attempt to find a source on the internet that matches your current level of understanding. An AI model, on the other hand, can provide personalized and adaptive instruction tailored to your level of understanding and learning style. There may be a future where the gold standard of learning — personalized tutoring — is available to everyone. The consequences of that are unimaginable.

Generative AI changes our relationship with knowledge, not only flattening the barriers to accessing it but also explaining it in a way tailored to the reader. It creates a gentle slope between your level of knowledge and the level of knowledge required to attack a particular subject. But the ability to access knowledge that is appropriately tailored and, more importantly, accurate starts — and ends — with the user. As knowledge gets easier to obtain, reasoning becomes more and more important. But the use of those philosophical thinking skills does not end once you get the output you think you were looking for; the job is not yet done. As we know, AIs can make mistakes, and they're particularly good at making incorrect outputs seem plausible, which makes the ability to discern truth another hugely important skill. To engage with the technology in a responsible way that gets us the appropriate and accurate information we want, we must lead with a philosophical mindset and a healthy dose of skepticism and common sense throughout the entire journey.

There was a point in time when, in order to create a computer program, I had to physically flip switches or punch holes in a paper card. That creation process operated at ground level, tied to the intricacies of how many bits of memory or registers the computer possessed. With billions of transistors and trillions of memory cells, our software creation process had to rise to higher and higher levels, through computer languages that abstract away the complexity of the underlying hardware, allowing developers to focus almost entirely on the quality of the algorithm rather than the ones and zeros.

Today, we are at a point where computers (i.e., AI) do not need this intermediate level of translation between the language we speak and the one they understand. We can set aside the Rosetta Stone and just speak English to a computer, and it will likely understand just as well as if we spoke to it in Python. This immediately presents two choices: We can get lazy, or we can elevate our thought.

When language is no longer the barrier, we can employ the full expressivity of human language to convey to the AI higher-level concepts and logic that capture our request in the most compact and effective way: declarative (focused on the result we want to obtain) rather than imperative (focused on the steps of how to get there). Imperative: Turn left, then go straight, then left again, and so on (1,000 times). Declarative: Take me home. I've seen people on social media create entire games with just a few skillfully written prompts, games that in the very recent past would have taken months to develop.
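
The same contrast exists inside programming languages themselves. A small illustration in Python (my example, not the author's):

data = [3, 1, 4, 1, 5, 9, 2, 6]

# Imperative: spell out every step of how to get there.
evens_squared = []
for n in data:
    if n % 2 == 0:
        evens_squared.append(n * n)

# Declarative: state the result we want; the language works out the steps.
evens_squared_declarative = [n * n for n in data if n % 2 == 0]

assert evens_squared == evens_squared_declarative  # both are [16, 4, 36]

Prompting an AI in plain English pushes this one level further: the "program" becomes a statement of intent, and the steps are left entirely to the machine.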

Which brings me back to my original point: Having a crisp mental model of a problem, being able to break it down into tractable steps, practicing first-principles thinking, and sometimes being prepared (and able) to debate a stubborn AI — these are the skills that will make a great engineer in the future, and the same consideration likely applies to many job categories.

We don't want to lose the ability to open the hood when needed and fix things that an AI may have missed, or (importantly) to be in a position to audit what an AI has created. Losing that would be a real problem for humans, and we won't likely let it happen — we still have to build the AIs, at the very least. But preserving that ability only takes us partway there. Automating the mechanics of code creation and focusing on our critical-thinking abilities is what will allow us to create more, faster, and have a disproportionate impact on the world. Helping AI help us be more human, less computer.

Marco Argenti is the Chief Information Officer at Goldman Sachs.
 
SkyNet will become sentient and all of us will become slaves.
 