“The key to success is going to bed a little smarter each day.” Warren Buffett – Investor (born 1930)
Shari Lewis

She found out her show was cancelled by overhearing executives on an elevator—they didn’t even know she was standing behind them.
In 1963, Shari Lewis was one of the most talented performers in television.
She could sing. She could dance. She could conduct a symphony orchestra. She could perform ventriloquism so precisely that audiences forgot they were watching a woman with a sock on her hand.
She had trained at the School of American Ballet. Studied acting with Sanford Meisner. Learned piano at two years old. Won a Peabody Award. Hosted a show that ran live on television without error—week after week, year after year.
And NBC executives decided she was replaceable.
They cancelled The Shari Lewis Show to make room for cartoons.
She wasn’t told directly. She learned about it while standing in an elevator, listening to men in suits discuss the decision as if she wasn’t there.
“All of it… my entire field crashed around my ears,” she said later.
The industry had made its position clear: children’s television was filler. If the audience was young, the work didn’t count. And the woman who created that work? She was a novelty. A mascot. Not an artist.
But here’s what they got wrong about Shari Lewis:
She didn’t need their permission.
When American networks abandoned live children’s programming, Lewis moved to England and hosted a show on the BBC for eight years. When that work dried up, she performed in Las Vegas casinos and in touring companies of Broadway shows, and appeared on variety programs with Ed Sullivan and Johnny Carson.
When those opportunities faded, she reinvented herself again.
She became one of the few female symphony conductors in the world—performing with over 100 orchestras, including the national symphonies of the United States, Canada, and Japan. She learned to speak Japanese for her performances there.
Once, she walked onto a stage at a state fair and found only four people in the audience.
She did the show anyway.
That was who Shari Lewis was.
Not a puppet act. Not a children’s entertainer waiting for permission. A performer who controlled timing, voice, pacing, and audience attention with surgical precision—and refused to stop working just because an industry decided she wasn’t serious.
Then, nearly 30 years after that elevator conversation, PBS came calling.
In 1992, at 59 years old, Lewis launched Lamb Chop’s Play-Along.
The show won five consecutive Emmy Awards. It was the first children’s program in seven years to beat Sesame Street for a writing Emmy. A new generation of children fell in love with Lamb Chop, Charlie Horse, and Hush Puppy—the same characters executives had declared “outdated” three decades earlier.
The audience hadn’t moved on. The industry had simply stopped paying attention.
Lewis didn’t treat this as a comeback. She treated it as what it always was: a correction.
She testified before Congress in 1993 to advocate for children’s television. Lamb Chop was granted special permission to speak. When elementary schools started cutting music programs, Lewis created The Charlie Horse Music Pizza to teach children about music through entertainment.
She was still innovating. Still refusing to be small.
In June 1998, Lewis was diagnosed with uterine cancer and given six weeks to live. She was in the middle of taping new episodes.
She finished them anyway.
Her final performance was a song called “Hello, Goodbye.” Her crew held back tears as she sang. She was saying goodbye to them, to the children watching, and to the character who had been her partner for over 40 years.
Shari Lewis died on August 2, 1998. She was 65 years old.
The industry remembered her fondly. It always does when it’s too late.
But her work didn’t need their remembrance. It endured on its own terms—passed down from parents to children, from one generation to the next, because the audience always knew what the executives never understood:
Precision is not small just because it serves children.
Craft is not diminished by joy.
And the woman who made a sock puppet come alive was never the novelty.
She was the reason it worked at all.
Doug Engelbart

The wooden box had two metallic wheels.
It looked like a toy, or perhaps a piece of scrap assembled in a garage, but the man holding it believed it was the key to the human mind.
It was December 9, 1968.
In the cavernous Brooks Hall in San Francisco, more than a thousand of the world’s top computer scientists sat in folding chairs, waiting. They were used to the roar of air conditioning units cooling massive mainframes. They were used to the smell of ozone and the stack of stiff paper punch cards that defined their working lives.
They were not used to Douglas Engelbart.
He sat alone on the stage, wearing a headset that looked like it belonged to a pilot, staring at a screen that flickered with a ghostly green light. Behind the scenes, a team of engineers held their breath, praying that the delicate web of wires and microwave signals they had cobbled together would hold for just ninety minutes.
If it worked, it would change how humanity thought.
If it failed, Douglas Engelbart would simply be the man who wasted millions of taxpayer dollars on a fantasy.
The world of 1968 was analog.
Information lived on paper. If you wanted to change a paragraph in a report, you retyped the entire page. If you wanted to send a document to a colleague in another city, you put it in an envelope and waited three days. If you wanted to calculate a trajectory, you gave a stack of cards to an operator, who fed them into a machine the size of a room, and you came back the next day for the results.
Computers were calculators. They were powerful, loud, and distant. They were owned by institutions, guarded by specialists, and kept behind glass walls. The idea that a single person would sit in front of a screen and “interact” with a computer in real-time was not just technically difficult; it was culturally absurd.
Engelbart, a soft-spoken engineer from Oregon, saw it differently.
He had grown up in the Depression, fixing water pumps and electrical lines. He understood tools. He believed that the problems facing humanity—war, poverty, disease—were becoming too complex for the unassisted human brain to solve. We needed better tools. We needed to “augment human intellect.”
For years, he had run a lab at the Stanford Research Institute (SRI). While others focused on making computers faster at math, Engelbart’s team focused on making them responsive. They built systems that allowed a user to point, click, and see results instantly.
They called their system NLS, or the “oN-Line System.”
It was a radical departure from the status quo. To the establishment, computing was serious business involving batch processing and efficiency. Engelbart was talking about “manipulating symbols” and “collaboration.”
The pressure on Engelbart was immense.
The funding for his Augmentation Research Center came from ARPA (the Advanced Research Projects Agency), the same government body responsible for military technology. They had poured significant resources into his vision, but results were hard to quantify. There were no enemy codes broken, no missile trajectories calculated. Just a group of men in California moving text around on a screen.
The critics were loud. They called him a dreamer. They said his ideas were “pie in the sky.” Why would anyone need to see a document on a screen when typewriters worked perfectly fine? Why would anyone need to point at data?
This presentation was his answer.
It was an all-or-nothing gamble.
To make the demonstration work, Engelbart wasn’t just using a computer on the stage. The machine itself—an SDS 940 mainframe—was thirty miles away in Menlo Park. He was controlling it remotely.
His team had leased two video lines from the telephone company, a massive expense and logistical nightmare. They had set up microwave transmitters on the roof of the civic center and on a truck parked on a ridge line to relay the signal.
In 1968, sending a video signal and a data signal simultaneously over thirty miles to a live audience was the equivalent of a moon landing.
The computer industry was built on a specific, rigid logic.
Computing Logic: Computers are scarce, expensive resources. Human time is cheap; computer time is expensive. Therefore, humans must prepare work offline (punch cards) to maximize the machine’s efficiency. Interactive computing wastes the machine’s time.
This logic governed the industry. It was why IBM was a titan. It was why office workers sat in rows with typewriters. It was the “correct” way to do things.
It worked perfectly—until it met Douglas Engelbart.
Engelbart believed that human time was the precious resource, not the machine’s. He believed the machine should serve the mind, even if it was “inefficient” for the hardware.
As the lights went down in Brooks Hall, the hum of the crowd faded.
Engelbart looked small on the big stage. The screen behind him, a massive projection of his small monitor, glowed into life.
He spoke into his microphone, his voice steady but quiet.
“If in your office, you as an intellectual worker were supplied with a computer display, backed up by a computer that was alive for you all day, and was instantly responsive to every action you had, how much value could you derive from that?”
It was a question nobody had ever asked.
He moved his right hand.
On the massive screen, a small dot moved.
The audience froze.
He wasn’t typing coordinates. He wasn’t entering a command code. He was simply moving his hand, and the digital ghost on the screen followed him. He was using the wooden box with the wheels—the device his team had nicknamed “the mouse” because the cord looked like a tail.
Today, a cursor moving on a screen is as natural as breathing. In 1968, it was magic.
But he didn’t stop there.
He clicked on a word. It was highlighted.
He deleted it. It vanished.
The text around it snapped shut to fill the gap.
A murmur ran through the hall. He wasn’t rewriting the page. He was manipulating information as if it were a physical object, yet it was made of light.
He showed them a “grocery list.” He categorized items. He collapsed the list so only the headers showed, then expanded it again to show the details.
He called this “view control.” We call it outlining.
He showed them a map. He clicked a link, and the screen jumped to a detailed diagram of a component. He clicked back, and he was at the map again.
He called it “hypermedia.” We call it the web.
The demonstration continued, each minute adding a new impossibility to the list.
The tension in the control room was suffocating. Every second the system stayed online was a victory against the laws of probability. A single blown fuse, a misaligned microwave dish, a software bug—any of it would have turned the screen black and ended the dream.
Then came the moment that truly broke the room.
Engelbart introduced a colleague, Bill Paxton.
Paxton wasn’t on stage. He was thirty miles away, sitting in the lab at SRI.
His face appeared in a window on the screen, crisp and clear.
The audience gasped.
They were looking at a man in Menlo Park, while listening to a man in San Francisco, both looking at the same document on the same screen.
“Okay, Bill,” Engelbart said. “Let’s work on this together.”
On the screen, two cursors appeared. One controlled by Engelbart, one by Paxton.
They edited the text together. Engelbart would point to a sentence, and Paxton would paste it into a new location. They were collaborating, in real-time, across a distance, using a shared digital workspace.
It was Google Docs, Zoom, and Slack, demonstrated a year before the internet (ARPANET) even existed.
The audience, composed of the smartest engineers in the world, sat in stunned silence. They were watching science fiction become a documentary.
They weren’t just seeing new gadgets. They were seeing the destruction of their entire worldview. The idea of the solitary computer operator was dead. The idea of the computer as a mere calculator was dead.
Engelbart was showing them a window into a world where minds could connect through machines.
He typed, he clicked, he spoke. He operated a “chorded keyset” with his left hand, entering commands as fast as a pianist, while his right hand flew across the desk with the mouse. He was a conductor of information.
For ninety minutes, the system held.
The microwave links stayed true. The software didn’t crash. The mainframe thirty miles away processed every command.
When Engelbart finally took off the headset and the screen went dark, there was a pause.
A hesitation.
Then, the audience stood.
It wasn’t a polite golf clap. It was a roar. It was the sound of a thousand experts realizing that everything they knew about their field had just become obsolete.
They rushed the stage. They wanted to touch the mouse. They wanted to see the keyset. They wanted to know how he did it.
The “Mother of All Demos,” as it was later christened, did not immediately change the market. Engelbart did not become a billionaire. He was a researcher, not a salesman. His system was too expensive and too complex for the 1970s.
But the seeds were planted.
Sitting in the audience were the young engineers who would go on to work at Xerox PARC. They would take the mouse, the windows, and the graphical interface, and they would refine them.
Steve Jobs would visit Xerox PARC a decade later, see the descendants of Engelbart’s mouse, and use them to build the Macintosh.
Bill Gates would see it and build Windows.
Tim Berners-Lee would use the concept of hypermedia to build the World Wide Web.
Every smartphone in a pocket, every laptop in a cafe, every video call made to a loved one across the ocean—it all traces back to that ninety-minute window in 1968.
Douglas Engelbart died in 2013. He never sought fame. He watched as the world caught up to the vision he had seen clearly half a century before.
He proved that the pressure of the status quo—the belief that “this is how it’s always been done”—is brittle. It can be broken by a single person with a wooden box and the courage to show us what is possible.
The system said computers were for numbers.
He showed us they were for people.
Sources: “The Mother of All Demos” archives (SRI International); Smithsonian Magazine, “The 1968 Demo That Changed Computing”; New York Times obituary for Douglas Engelbart, 2013; Doug Engelbart Institute records.
Dr. Makis on Ivermectin

According to Dr. William Makis, ivermectin cream can effectively treat multiple skin ailments, including rosacea, cystic acne, eczema, psoriasis, and even skin cancer. “You can literally use it for any inflammatory or autoimmune skin condition.”
New research explains why some minds stay awake at night

- Insomnia keeps your mind in daytime problem-solving mode at night, which prevents the natural mental drift that helps you fall asleep
- New research shows that people with insomnia have weaker circadian signals, making it harder for the brain to shift from alert thinking into the dream-like patterns that support rest
- Sequential thinking stays elevated at night in insomniacs, creating racing thoughts and mental loops that make it difficult to unwind
- Strengthening your circadian rhythm through morning light exposure, dim evening lighting, and consistent nighttime cues helps your brain recognize when to power down
- Simple practices such as cognitive shuffling, sensory-based grounding during nighttime awakenings, and daily movement — especially walking — support a clearer day-night contrast and more restorative sleep
Secret Food Combinations

Brutal breakfast reality check – Oatmeal or Omelette?

Dr. Mark Hyman shares a Harvard study that will make you rethink breakfast:
Overweight teens ate the exact same calories in three different meals — instant oatmeal, steel-cut oats, or an omelette.
Result?
Instant oatmeal group: sky-high insulin, cortisol & adrenaline (like being chased by a tiger), and they ate 81% more food later.
Steel-cut oats group: still ate 50% more than the omelette group.
Omelette group stayed satisfied longest.
Moral: Start your day with protein + fat — not starch or sugar (no muffins, bagels, oatmeal, pancakes…).
Who’s switching to eggs/bacon/avocado tomorrow?
Millions of your mother’s cells persist inside you, and now we know how

Every human born on this planet is not entirely themselves.
A tiny fraction of our cells – around one in a million – is actually not our own, but comes from our mothers. That means each of us has millions of cells that our immune systems would normally recognize as foreign; yet somehow, in most of us, they hang around peacefully without causing any immune problems.
Now, immunologists have figured out why. A small number of maternal immune cells that cross the placenta during pregnancy actively train the fetus’s immune system to tolerate the mother’s cells for their entire life.
The exchange of cells between a mother and a fetus is a well-documented phenomenon that scientists have known about for more than 50 years. It’s called microchimerism, and it goes both ways: every human who has ever been pregnant retains cells from their fetus, and every human retains cells from their mother.
These lingering cells pose a puzzle for immunology, which is built around the idea that the immune system should mount an attack against foreign cells.
A team led by pediatric infectious disease specialist Sing Sing Way of Cincinnati Children’s Hospital Medical Center wanted to understand more about how these foreign maternal cells keep the immune system in check, and what role they play in shaping the fetus’s immune system.
To find out, the researchers studied maternal microchimerism in mice. Building on their previous studies, the researchers bred mice with immune cells engineered to express specific cell surface markers. This allowed researchers to selectively deplete those cells and see whether or not immune tolerance was maintained.
Here’s where it got fascinating. A small subset of the maternal immune cells, with properties similar to bone marrow myeloid cells and dendritic cells, persisted long after birth. They were also strongly associated with both immune activity and the expansion of regulatory T cells – the cells that tell the immune system that everything is copacetic.
To confirm this, the researchers next selectively depleted those specific maternal cells in the offspring mice.
The results were dramatic. The regulatory T cells disappeared, and so did the immune tolerance of the maternal cells.
The implication is that lifelong tolerance to maternal microchimeric cells is probably dependent on just a tiny subset of maternal cells. Take those away, and immune chaos likely ensues. That also means that immune tolerance needs to be continuously and actively maintained; it’s not a one-and-done process during pregnancy.
That’s interesting and exciting in its own right, but the research also offers a way to gain a greater understanding of the broad swath of diseases and conditions to which microchimerism may contribute.
“The new tools we developed to study these cells will help scientists pinpoint exactly what these cells do and how they work in a variety of contexts including autoimmune disease, cancer and neurological disorders,” Way says.
“Microchimerism is increasingly linked with so many health disorders. This study provides an adaptable platform for scientists to investigate whether these rare cells are the cause of disease, or alternatively, found in diseased tissue at increased levels as part of the natural healing process.”
The research has been published in Immunity.
Expressive Peppers

Why BlackRock Just Moved $2.1 Trillion Out of America (And What It Means for You)

The video covers the four stages of empire, where the US empire stands at present, and, more importantly, what you can do about it.
Click to view the video: https://www.youtube.com/watch?v=7iHuk5G1h-I
