Saturday, September 9, 2023

New top story on Hacker News: Show HN: WhatsApp-Llama: A clone of yourself from your WhatsApp conversations

3 points by advaith08 | 0 comments on Hacker News.
Hello HN! I've been thinking about the idea of an LLM that's a clone of me: instead of generating replies as a helpful assistant, it generates replies exactly like mine. The concept has appeared in fiction numerous times (the talking paintings in Harry Potter that mimic the person painted, the clones in The Prestige), and with LLMs I think there might actually be a way to do something like this.

I've just released a fork of facebookresearch/llama-recipes that lets you fine-tune a Llama model on your personal WhatsApp conversations. It trains the model (using QLoRA) to respond in a way that's eerily similar to your own texting style.

What I've found so far:

Quick learning: The model quickly picks up personal nuances, emoji usage, and phrases you use. I've trained just 1 epoch on a P100 GPU using QLoRA and 4-bit quantization, and it has already captured my mannerisms.

Turing tests: As an experiment, I asked my friends to each ask me 3 questions, and I replied with 2 candidate responses (one from me and one from Llama). My friends then had to guess which response was mine and which was Llama's. Llama managed to fool 10% of my friends, but with more compute I think it can do much better.

Here's the GitHub repository: https://ift.tt/QhTrRtx

Would love to hear feedback, suggestions, and any cool experiences if you decide to give it a try! I'd love to see how far we can push this by training bigger models for more epochs (I ran out of compute credits).
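For the curious, here is a rough sketch of what a QLoRA fine-tune with 4-bit quantization can look like using the Hugging Face stack (transformers + peft + bitsandbytes + datasets). The repo itself builds on the llama-recipes fork, so treat the model name, the whatsapp.jsonl data file, and the hyperparameters below as illustrative assumptions rather than the exact code:

import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

MODEL = "meta-llama/Llama-2-7b-hf"  # assumed base model, not necessarily the one in the repo

# 4-bit (NF4) quantization so the frozen base weights fit on a single 16 GB GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    MODEL, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# QLoRA: small trainable LoRA adapters on the attention projections,
# while the quantized base model stays frozen
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# "whatsapp.jsonl" is a hypothetical file of {"text": "<chat turns>"} rows
# produced from an exported WhatsApp conversation
dataset = load_dataset("json", data_files="whatsapp.jsonl", split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="whatsapp-llama-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,          # matches the single epoch mentioned above
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

The point of the 4-bit + LoRA combination is that only the small adapter matrices are trained, which is what makes a run like this feasible on a single P100.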
