Welcome to today’s #HealthXPh discussion on making digital health work in real-world settings. I’m @bonedoc, an orthopedic surgeon who’s been practicing in the Philippines for over twenty years, focusing on workflow redesign and systems improvement. Today we’ll explore three critical questions facing clinician-innovators everywhere, plus dig into some cross-cutting challenges you’ve all been asking about.
Our format: the #HealthXPh chat is an hour-long conversation among healthcare professionals on #bluesky, moderated by a host who chooses a topic for the week and writes the anchor post as a backgrounder for the participating audience. The conversation revolves around three main questions, which participants answer by appending #healthxph to their Bluesky posts. I’ll kick off each question with my perspective, then open it to our panelists and the audience. Jump in anytime—this is a conversation, not a lecture.
T1: Starting a New Procedure or Pathway
“When introducing a new interprofessional procedure or pathway, how do you start it—who do you brief, what do you measure on day 1, and which safety stop builds trust?”
My approach—the “concentric circle briefing”:
Week -2: Brief your champions first—the residents who’ll execute it, the head nurse, the relevant specialists. Ask them: “What could go wrong?” Their concerns become your safety stops.
Week -1: Brief department head and quality officer. Show them your safety metrics and stopping rules. In my experience, showing you’ve thought about failure wins more support than showing success.
Day 1: Five-minute huddle before each case. One sentence about what’s different, one about what stays the same, and one clear safety stop: “If X happens, we return to standard protocol immediately, no questions asked.”
What I measure on day one (a logging sketch follows this list):
- Time metrics (procedure duration, turnover time)
- Safety events (any deviation from expected course)
- Team confidence score (1-5 scale, anonymous, after each case)
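You can capture all three metrics without adding paperwork: a shared spreadsheet or a few lines of Python will do. Here’s a minimal sketch, assuming a simple CSV case log with hypothetical column names:

```python
import csv
from statistics import median

def summarize(log_path="day1_log.csv"):
    """Summarize a day-one case log with (hypothetical) columns:
    case_id, procedure_min, turnover_min, safety_event, confidence_1to5."""
    with open(log_path, newline="") as f:
        rows = list(csv.DictReader(f))
    confidence = [int(r["confidence_1to5"]) for r in rows]
    events = [r for r in rows if r["safety_event"].strip()]
    print(f"cases: {len(rows)}, safety events: {len(events)}")
    print(f"median procedure time: "
          f"{median(float(r['procedure_min']) for r in rows):.0f} min")
    print(f"median team confidence: {median(confidence)}")
    # A sagging median (3/5 or below) is a cue to pause and debrief,
    # not a number to hide.
    if median(confidence) <= 3:
        print("confidence is sagging -- debrief before the next case")
```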
The safety stop that builds trust: When we introduced a new minimally invasive approach, our safety stop was: “If we can’t achieve adequate visualization within 15 minutes, we convert to open.” We triggered it twice in our first ten cases. Those conversions built trust because we’d named it upfront.
Additional questions for participants:
- What’s your go-to safety stop for new protocols?
- How do you measure team readiness, not just clinical outcomes?
- Has anyone tried something different than my “concentric circle” approach?
Open to audience: What’s stopped you from piloting a new procedure? What would make you feel safe to try?
T2: The Digital Tool That Made Adoption Inevitable
“Name a simple digital tool (or tweak) that saved clinicians time in your setting; what made adoption inevitable rather than optional?”
My example: Viber/Facebook/Socmed-based imaging and appointment system
Prior to the pandemic, our residents spent 20 minutes per patient hunting for old radiographs. We implemented:
- OPD clinic Viber/FB messenger account (free, works on any phone)
- Computers with X-ray viewers in every OPD clinic
- Networked photos of X-rays/CTs, uploaded immediately after reading and tagged with patient name and date (see the naming sketch after this list)
- Automated appointment reminders through Messenger chat bot (free)
- One-page Google Form (8 required fields) replacing 3 pages of handwritten notes
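The “tagged with patient name and date” step is what made retrieval instant: a consistent filename is searchable from the chat app itself. A minimal sketch of that kind of naming rule (the format and function are illustrative, not our actual tooling):

```python
from datetime import date

def tag_filename(surname: str, given: str, study: str, taken: date) -> str:
    """Build a searchable filename for a shared imaging photo.

    The format is illustrative (SURNAME-GIVEN_YYYY-MM-DD_STUDY.jpg);
    the point is that one consistent pattern lets anyone find a film
    in seconds using the chat app's own search box.
    """
    clean = lambda s: s.strip().upper().replace(" ", "")
    return f"{clean(surname)}-{clean(given)}_{taken.isoformat()}_{clean(study)}.jpg"

# tag_filename("Dela Cruz", "Juan", "XR-HIP", date(2020, 3, 15))
# -> "DELACRUZ-JUAN_2020-03-15_XR-HIP.jpg"
```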
What made adoption inevitable:
- Visible time savings within the first week: Residents could retrieve imaging in seconds rather than hunting through filing rooms. When you save meaningful time, you don’t need a mandate.
- Solved a pain point, not an administrator’s wish: This came directly from a resident saying they spent more time looking for films than looking at patients.
- Zero training required: Everyone already knew Viber/FB Messenger. The Google Form auto-populated from existing patient lists.
- The critical tweak: We made the old way harder than the new way. We reduced printing of duplicate imaging reports. Want an old X-ray? You could walk to the basement filing room or open Viber. The path of least resistance became the digital path.
That’s the secret: don’t merely make digital adoption optional and easier. Make it inevitable, because the alternative wastes time everyone wants back.
Questions for participants:
- What’s your “app moment”—the simple tool that just worked?
- How do you make the old way harder without alienating your team?
- Any free/low-cost tools that surprised you with their adoption rate?
Audience challenge: In the chat, drop your “simple tool that saved time” story. Let’s crowdsource a resource list.
T3: Reliability Practice for Early Wins
“Which one reliability practice (checklist, escalation rule, huddle) yields the biggest early win, and how do you prevent drift after week 3?”
My answer: The 10-minute morning safety huddle wins fastest—but only if you protect it fiercely after novelty wears off.
What the huddle looks like:
- 8:00 AM sharp, every day, no chairs (standing keeps it short)
- Three questions per patient:
  - “What could kill this patient today?”
  - “What’s the plan to prevent it?”
  - “Who owns each action item?”
Why it yields early wins:
Research supports this approach. Studies in pediatric ICUs have found that daily huddles are time-efficient and give staff a high level of awareness of practice changes. One surgical unit study showed daily safety huddle compliance rising from 73% to 97%, with hundreds of safety issues addressed, the majority relating to infection control and medication errors.
The benefits I’ve observed include:
- Reduced communication errors: When nurses hear the plan directly from physicians, miscommunication decreases dramatically
- Faster learning for juniors: First-year residents learn escalation patterns much faster because they hear senior decision-making out loud daily
- Culture shift: Practicing “What could go wrong today?” makes discussing “What went wrong yesterday?” natural rather than accusatory
Preventing drift after week three (where most initiatives die):
The HUSH project across 92 wards in five UK hospitals found that successful embedding of patient safety huddles took an average of 19.6 weeks—this tells us sustainability requires intentional effort.
My anti-drift strategies:
- Anchor to an unchangeable event: Not “8 AM-ish,” but “immediately after night team sign-out.” Link it to something that must happen anyway.
- Measure one metric publicly: Track a specific outcome (like communication-related safety events) on a visible whiteboard. When the metric trends unfavorably, the team self-corrects.
- Rotate the facilitator: Every week, a different person leads—consultants, fellows, senior nurses. This prevents it from becoming one person’s initiative.
- Build in kill switch reviews: At week six and week twelve, ask: “Is this huddle still useful, or is it theater?” Permission to kill it if it doesn’t work paradoxically keeps it alive because people trust you’re not wasting their time.
- Assign a “huddle keeper”: One senior resident or nurse educator protects the time slot, sends brief reminders, and tracks attendance patterns (not to shame, but to notice issues like “Anesthesia hasn’t attended in two weeks—should we adjust timing?”).
Questions for participants:
- Huddles, checklists, or escalation rules—what’s worked best for you?
- How have you sustained reliability practices past the three-week mark?
- What’s your experience with “good theater” vs. actual behavior change?
Audience poll: In chat, vote: 1 = huddles, 2 = checklists, 3 = escalation rules. Which has given you the biggest early win?
Cross-Cutting Discussion: The Questions That Keep Coming Up
Let me address a few questions that cut across all our topics, then we’ll open this wide.
“An innovation you wish you had earlier”
Run charts. For fifteen years, I made changes based on intuition and anecdotes. “I think infection rates are better.” “It feels like patients mobilize faster.” I was probably right—but I couldn’t prove it, so I couldn’t scale.
Then I learned to plot a simple run chart: time on X-axis, outcome on Y-axis, median line for baseline. Nothing fancy. Excel, not SPSS.
Example: I charted “days to full weight-bearing after hip fracture fixation.” The baseline median was clear. After implementing a standardized mobilization protocol, the median dropped noticeably. The chart showed the shift visually. I took it to a department meeting. Skeptics couldn’t argue with the trend.
If I’d discovered run charts earlier in my career, I would have scaled effective changes faster and abandoned ineffective ones before wasting everyone’s time.
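Excel works fine; if you prefer a script, the same chart is a few lines of Python. A minimal sketch with made-up numbers shaped like my weight-bearing example (assumes matplotlib is installed):

```python
import matplotlib.pyplot as plt
from statistics import median

# Illustrative data: days to full weight-bearing per consecutive
# hip fracture case; the first 15 cases are the baseline period.
days = [12, 10, 14, 11, 13, 12, 15, 11, 12, 13, 10, 14, 12, 11, 13,
        9, 8, 10, 7, 9, 8, 10, 9, 7, 8]   # protocol change at case 16
baseline_median = median(days[:15])

plt.plot(range(1, len(days) + 1), days, marker="o")
plt.axhline(baseline_median, linestyle="--",
            label=f"baseline median = {baseline_median}")
plt.axvline(15.5, color="gray", label="protocol change")
plt.xlabel("Consecutive case")
plt.ylabel("Days to full weight-bearing")
plt.title("Run chart: time on X, outcome on Y, median for baseline")
plt.legend()
plt.show()
```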
Panel question: What tool or method do you wish you’d discovered a decade earlier?
“Best starter step for a resource-limited setting?”
Start with workflow mapping before you touch any technology.
Too many clinics install tablet systems only to discover they’ve digitized a broken workflow. Now you have a broken workflow that requires charging cables.
The starter step that works:
- Pick one bottleneck – The place where patients wait longest or staff frustration peaks
- Map current workflow on a single sheet of paper—boxes and arrows, every step the patient takes
- Time each step for 10 patients with a stopwatch (don’t estimate—actually measure; a tallying sketch follows this list)
- Find the stupid steps – There’s always at least one step that makes everyone say “Why do we do that?”
- Eliminate one stupid step – Choose the one with the highest annoyance-to-elimination ratio
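When the stopwatch numbers come back, let the medians argue for you. A minimal sketch with invented step names and times:

```python
from statistics import median

# Hypothetical stopwatch data: minutes per step for 10 patients.
timings = {
    "registration":       [4, 5, 3, 6, 4, 5, 4, 7, 5, 4],
    "duplicate form":     [6, 7, 5, 8, 6, 7, 6, 9, 7, 6],
    "vitals":             [3, 2, 3, 4, 3, 3, 2, 3, 4, 3],
    "wait for old films": [18, 25, 12, 30, 22, 15, 28, 20, 17, 24],
    "consult":            [11, 9, 13, 10, 12, 11, 10, 14, 9, 12],
}

# Rank steps by median time; the top of this list is your bottleneck.
for step, mins in sorted(timings.items(),
                         key=lambda kv: median(kv[1]), reverse=True):
    print(f"{step:20s} median {median(mins):>4.1f} min")
```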
Real example: A clinic I advised had patients filling out identical forms twice—once at registration, once when the nurse called them back. The reason? “Because we always have.” No one could remember why it started.
We eliminated the second form. Saved several minutes per patient. Cost: zero. Time investment: one afternoon of observation and discussion.
That single change built enough trust that when we proposed a digital registration system months later, staff agreed immediately. We’d proven we weren’t academics imposing theory—we were colleagues eliminating waste.
Start with a paper map and a stopwatch. Technology comes later, after you’ve fixed the workflow it will be automating.
Panel question: What’s your “starter step” recommendation for teams with limited resources?
“What evidence is good enough to spread a change beyond the pilot?”
I’ve struggled with this because the academic in me wants a randomized controlled trial, but the clinician in me knows patients can’t wait years for publication.
My current framework—you need three things (not one perfect thing, but three good-enough things):
- Safety data showing no new harms – A run chart of adverse events, comparison to your own baseline. This is non-negotiable. Even if your intervention improves efficiency, if there’s any signal of increased complications, you stop and investigate.
- Outcome improvement visible to skeptics – Not necessarily p<0.05, but something anyone can see: “Patients mobilize earlier,” “Staff spend less time on documentation,” “Complications decreased.” If the improvement is real, it shouldn’t require statistical contortions to demonstrate.
- Consensus from people who will implement it – You need key stakeholders—nurses, residents, other consultants—to say “This worked for us, and we’d recommend it.” Their endorsement is evidence.
My threshold: If I have a run chart showing improvement, zero safety signals, and several colleagues saying “This made my work better,” I’m comfortable spreading to the next unit carefully.
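One way to make “improvement visible to skeptics” semi-objective is a standard run chart rule: six or more consecutive points on the same side of the baseline median count as a shift, and points exactly on the median are skipped. A minimal sketch of that check:

```python
from statistics import median

def has_shift(values, baseline_n, run_len=6):
    """Flag a run chart shift: run_len or more consecutive points
    on the same side of the baseline median. Points exactly on the
    median are skipped and do not break the run (the usual rule)."""
    base = median(values[:baseline_n])
    run, side = 0, 0
    for v in values[baseline_n:]:
        if v == base:
            continue                      # on the median: skip
        s = 1 if v > base else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_len:
            return True
    return False

# With the weight-bearing numbers from the run chart sketch above,
# has_shift(days, baseline_n=15) returns True: the post-protocol
# cases sit below the baseline median of 12 for six straight points.
```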
I don’t wait for publication. I don’t wait for external validation. I spread it with the same safety stops, the same monitoring, and with the understanding that the next unit might discover it doesn’t work for them—and that’s acceptable.
Perfect evidence takes years. Good-enough evidence takes weeks. In resource-limited settings, we often can’t afford to wait for perfect.
Panel question: Where do you draw the line between “not enough evidence” and “good enough to scale”?
“How do you protect mentoring time—what do you stop doing?”
This might be the most important question, because mentoring is how change spreads, yet it’s first to get crowded out by clinical demands.
What I stopped doing:
- Stopped attending committees that don’t make decisions: I tracked output for several months. Some committees were productive; others spent entire meetings on updates that could have been emails. I resigned from the unproductive ones and freed significant time monthly.
- Stopped seeing patients who should see my colleagues: I screen referrals now. Complex revisions, unusual presentations, medico-legal situations—I refer those. Straightforward cases in healthy patients? I can manage those excellently. I supervise and teach, but don’t need to be the primary surgeon. This freed substantial OR time that I redirected to teaching and simulation.
- Stopped writing lengthy notes when structured templates work: I created templates for my most common cases with dropdown menus and checkboxes for routine documentation. I customize only when the clinical situation requires it. This saves meaningful time daily—time I’ve redirected to direct teaching and case reviews.
The principle: Audit your time for one week. Every hour, note what you did. At week’s end, ask: “Which activities only I can do, and which could be done by someone else, by a template, or not at all?” Then ruthlessly cut or delegate everything in the latter category.
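The audit itself is just a tally. A minimal sketch (the categories and the only-I-can-do-this flag are illustrative) that shows where the delegable hours hide:

```python
from collections import Counter

# Hypothetical week of hourly log entries: (category, only_me?).
log = [
    ("clinic", True), ("committee", False), ("committee", False),
    ("OR", True), ("documentation", False), ("mentoring", True),
    ("clinic", True), ("documentation", False), ("committee", False),
]

hours = Counter(cat for cat, _ in log)
delegable = Counter(cat for cat, only_me in log if not only_me)

# Anything with a high delegable count is a candidate to cut,
# template, or hand off.
for cat, h in hours.most_common():
    print(f"{cat:15s} {h} h total, {delegable[cat]} h delegable")
```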
Mentoring doesn’t happen when you find time. It happens when you make time by stopping things that don’t matter.
Panel question: What did you stop doing to make space for mentoring? What’s been hardest to let go?
“Give one example of de-implementation”
The beloved practice I retired: routine post-operative radiographs on day one after uncomplicated ORIF (open reduction and internal fixation).
For many years, we X-rayed every ORIF patient on post-op day one, even when we already had intraoperative and immediate post-op films. It was protocol. It was what I was taught. It felt responsible.
Then I examined the data. Research supports this reassessment: a Harvard Medical School study found that postoperative radiography after primary total knee arthroplasty (TKA) was of low clinical utility yet resulted in considerable healthcare costs and unnecessary radiation burden. A UK study of hundreds of total knee replacements found only two patients with significant abnormalities on post-op X-rays, neither requiring further treatment. Research from Brigham and Women’s Hospital found that almost 100% of scans after TKA had no impact on clinical management while costing substantial money and delivering unnecessary radiation.
In my own practice review, the yield was similarly low. Meanwhile, we were consuming resources on largely unnecessary imaging, delaying mobilization while patients waited for radiology, and exposing them to radiation with minimal benefit.
How I communicated the change:
- Presented data to my team first – Not “We’re stopping X-rays,” but “Here’s what the literature shows and what our own X-rays have actually revealed”
- Proposed new protocol: X-rays only when clinically indicated—unusual intra-operative findings, concern for malalignment, or patient symptoms. Not “never,” but “when needed”
- Piloted on my own patients first for several months while partners continued routine imaging. Tracked any missed findings. Found none. This gave me data to demonstrate safety.
- Presented department comparison: My patients mobilized earlier on average (no waiting for routine X-ray) with equivalent complication rates. Plus we saved imaging resources that could be redirected.
- Adopted department-wide with continued monitoring. Complication rates remained stable. Mobilization times improved. We redirected some of the saved resources to enhanced physiotherapy.
What replaced it: Enhanced clinical examination skills. We trained residents to recognize signs of component malalignment or other complications through careful physical examination. We maintained high suspicion—if anything felt concerning, we imaged promptly. But “routine” disappeared from our vocabulary.
Communication principle: When you retire a beloved practice, don’t criticize the people who established it. They did it because they cared about safety—the same reason you’re proposing to stop. Frame it as “We’ve learned something new and the evidence has evolved” not “We were wrong.” Data, not judgment. Pilot first, prove safety, then spread.
Panel question: What practice have you retired? How did you overcome resistance? What replaced it?
Synthesis: Pilot Small, Mentor Widely, Document & Share
After twenty years of trying to improve healthcare while delivering care within it, successful transformation comes down to three principles:
1. Pilot small. Don’t redesign the entire hospital. Fix one workflow. Implement one tool. Change one protocol. Prove it works in your unit before asking anyone else to try it. Small pilots fail fast and cheap. Large initiatives fail slow and expensive.
2. Mentor widely. Your innovation dies with you unless you teach others. Spend as much time mentoring as implementing. Protect that time ruthlessly. The change that spreads is the change that has champions in every unit, not just yours.
3. Document and share. Write down what you did, what worked, what didn’t. Share it—at meetings, conferences, blogs, professional networks. Don’t wait for perfect data. Share the run chart, the safety protocol, the inefficient step you eliminated. Other clinicians in other resource-limited settings need to know what you learned.
Digital transformation isn’t about technology. It’s about people, processes, and the patient care they enable. The most sophisticated electronic health record means nothing if your workflow is broken. The simplest communication tool means everything if it helps your team deliver better care.
Start small. Build trust. Measure what matters. Spread responsibly.
Open Discussion: Let’s Learn From Each Other
Now it’s your turn:
For Participants:
- Which of these three questions resonates most with your current work?
- What’s one practice you’re piloting right now?
- What’s your biggest barrier to spreading change?
For audience (in chat):
- Share one “simple tool that saved time” in your setting
- Vote on which reliability practice (huddles/checklists/escalation rules) you want to hear more about
- Drop your questions for the panel—we’ll tackle as many as we can
Remember: We’re all learning together. There’s no perfect answer for resource-limited settings, only better experiments. What worked for me in the Philippines might need adaptation for your context—and what works for you might be exactly what I need to learn next.
Let’s make this a conversation, not a presentation. Who wants to jump in first?