NASA gave spiders caffeine. Their webs went chaotic. Sedatives made webs incomplete. Each substance left a distinct signature in the pattern.
Scientists documented human impairment the same way. Drunk speech has more pauses, sloppy consonants, leaked emotions. Fatigue shortens sentences. Sleep deprivation adds repetition. Each state has a measurable signature.
What happens if you try to make an LLM drunk?
The Experiment
Use PyTorch to load a language model. Modify how its layers actually behave. Change attention. Alter activations. Rewire information flow. Invent disruptions nobody has tried.
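One way to do this without retraining anything is a forward hook. The sketch below is illustrative, not a prescribed method: it perturbs the output of a single transformer layer with Gaussian noise, with a toy `nn.TransformerEncoderLayer` standing in for one block of a real LLM, and the `dose` scale is an invented knob.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for one block of a real model (assumption: any nn.Module works).
layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
layer.eval()  # disable dropout so the only disruption is ours

def add_noise(module, inputs, output):
    # Forward hook: returning a tensor replaces the layer's output.
    # "dose" is a hypothetical intensity knob, scaled to the activations.
    dose = 0.5
    return output + dose * output.std() * torch.randn_like(output)

x = torch.randn(1, 8, 32)  # (batch, sequence, hidden)

handle = layer.register_forward_hook(add_noise)
perturbed = layer(x)       # "drunk" forward pass
handle.remove()
clean = layer(x)           # same input, hook removed
```

The same hook pattern applies to a real checkpoint: register it on one decoder block, generate text, and vary `dose` the way the spider study varied concentration.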
The Questions
Does your modification produce patterns that match documented human impairment? Can you look at the output and identify what was changed — like scientists reading spider webs?
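Reading the "web" means scoring the output on features the speech literature links to impairment. A minimal sketch, assuming plain generated text as input; the feature set and the impairment labels in the comments are assumptions to be replaced with values from the published papers:

```python
import re
from collections import Counter

def impairment_signature(text):
    # Illustrative metrics only: sentence length (fatigue shortens it),
    # bigram repetition (sleep deprivation adds it), and filler words
    # standing in for the pauses of drunk speech.
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    bigrams = list(zip(words, words[1:]))
    repeats = sum(c - 1 for c in Counter(bigrams).values() if c > 1)
    return {
        "mean_sentence_len": len(words) / max(len(sentences), 1),
        "repetition_rate": repeats / max(len(bigrams), 1),
        "pause_markers": sum(words.count(w) for w in ("uh", "um", "er")),
    }

sig = impairment_signature("I was, uh, going to... going to say. Say it. Uh huh.")
```

Run the same function over clean and perturbed generations; if a modification has a signature, it shows up as a consistent shift in these numbers.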
Imitation vs Discovery
Train an AI on drunk text and it copies. That's imitation. Modify the architecture and observe what emerges. That's discovery. Which modifications produce which signatures? The spider research mapped substances to web patterns. You map architectural changes to language patterns.
Webs revealed how substances alter spider brains. What will language reveal?
Tools needed: Consumer GPU • PyTorch • Open-source LLM • Published speech research papers

