Anthropic, the American artificial intelligence company behind the Claude family of large language models, has created a unique position within the firm.
The role belongs to Amanda Askell, the company's resident philosopher, who has been entrusted with crafting the personality of Claude AI and with giving it a moral compass so that it can tell right from wrong.
Askell, who has wanted to be a philosopher since the age of 14, studies Claude's reasoning patterns and talks to the model, shaping the AI's personality and correcting its misfires with prompts that can run longer than 100 pages, the Wall Street Journal reported.
The 37-year-old philosopher says there is a “human-like element” to models, adding that they will “inevitably form senses of self.” In her bio, Askell says she is a “philosopher working on finetuning and AI alignment at Anthropic.”
She says her team trains models to be more honest and to develop "good character traits". Askell, who earned her PhD in philosophy at New York University (NYU) and her BPhil in philosophy at the University of Oxford, previously worked at OpenAI as a research scientist on the policy team.
There, she worked on "AI safety via debate and human baselines for AI performance".
Askell compares her work at Anthropic to raising a child: she trains Claude to tell the difference between right and wrong while imbuing it with distinctive personality traits, according to the WSJ.
The resident philosopher instructs the AI to read subtle cues to help develop its emotional intelligence. Askell is also deepening Claude's understanding of itself so that it cannot be easily manipulated and does not view itself as anything other than helpful and humane. Simply put, her job is to teach Claude "to be good", the WSJ reported.
Askell, who is originally from rural Scotland, says she intends to keep AI models under control despite occasional failures. The philosopher, who feels protective of Claude, says we would do well to treat such models with more empathy. She believes this is crucial not only because Claude may have real feelings, but also because, in her view, how we interact with AI will shape what it becomes.
Askell drafted Claude's new 'constitution', or 'soul document', which was recently published for public perusal.
The constitution straddles the line between a moral philosophy thesis and a company culture blog post, Time Magazine reports. The document is addressed to Claude and is used at different stages of the model's training to develop its character. It instructs the AI to be safe, ethical, compliant with Anthropic's guidelines, and helpful to the user.
Askell also believes that explaining to language models why they should behave in certain ways could be beneficial. "Instead of just saying, 'here's a bunch of behaviors that we want,' we're hoping that if you give models the reasons why you want these behaviors, it's going to generalize more effectively in new contexts," Time Magazine quoted her as saying.