All right, that seems to be working. Is that working? Yes. Great. Is this going to be kind of an interview-type thing, or how do we...
I think we just riff, and then it'll give us a summary and bullet points and all of that, and then we can structure it. I wish we had done it yesterday, because you had so many good points.
This is where I'm like, I don't remember anything.
Yeah, well, let's start at the top. The article is going to be on the lessons ChatGPT needs to learn in healthcare in order to avoid the challenges that its predecessors like IBM Watson have faced, and we'll find a good hook title for it later.
Yeah, maybe it's specifically "GPT and lessons from Watson." The title will say that, but the intro is: there's a lot of excitement around ChatGPT and how it can be used. Let's all just take a breath and remember that in the space in which we work, there are a lot of environmental factors to consider. In order to truly deliver valuable use cases in clinical discovery, development, and commercial pharma, there's a lot to go through, and putting something into a magical box and taking the answer isn't going to be a shortcut to value, because you're going to trip up on data privacy, on data security, on meeting regulations, on diligence. In this ecosystem, the people who touch product, and what is said about product, in life sciences have a lot of knowledge and understanding of regulations and how they can be applied. That specialist knowledge is not yet likely to be reflected in something like ChatGPT. So maybe you can speed up one small step, but to your point yesterday, that doesn't guarantee you get through medical-legal review, with the supporting evidence and confidence that that process requires. It's a reminder that until you break open the system, those constraints are real. They're real pillars, real checkboxes, real laws. They provide the framework for what we can say, how we can say it, when we can say it, what we can collect, and how we can store it, the whole process.
I think the story can be directed to innovators, right? ChatGPT, large language models, chatbots, all of them are truly innovative. But innovation doesn't necessarily mean commercial success, or even use-case success, if you don't understand what the audience needs.
Right. And then, you know, I'm just thinking, with this big announcement today: technology is never a savior, because things still go wrong. Maybe that's trying to tie too much into it, but it doesn't sound like it was a technology failure. It's going to have been a governance and oversight failure in this whole clinical trial, dropping however many patients. But maybe that's too far off topic.
Yeah, keep it in and see how it fits. So, on lessons for ChatGPT specifically: what does it need to focus on to be successful in a healthcare model, a pharma model?
So the implementation of technology is not just a procurement purchase, right? It's not click-and-buy, the way you're forever wedded to Google for an extra dollar of storage a month. The scrutiny and evaluation you'll be subjected to is significant. There are real standards to meet in terms of data security, privacy, cyber, document retention, and complying with those processes takes knowledgeable people in the space. I don't know how to say this. It's not just...
So it's a SaaS-model issue. If you don't understand how to sell into the vertical and what their concerns are, you're not going to get very far.
Right, and this is the whole thing about the SaaS model: it sounds great for investors. But one lesson learned is that in healthcare and pharma these things are all very nuanced, and most SaaS companies don't want to deal with that. In the end, even Komodo bought a consulting company. They bought people, right? They bought services, because no technology solution is deep enough and broad enough to do all of what needs to be done. And that is a real limitation, because in order to broaden the use case, you traditionally use people in services, which takes away from the SaaS model. So the challenge is around identifying a true SaaS use case for something like ChatGPT. And when you do that, then you get into the whole question of: what are you training on? What does it know? How do you understand the transparency of what it's giving you? And how do you have some level of control over its evolution? Because if it truly is an AI model, then it is reacting to feedback. Then, of course, who gets to give the feedback, and whose feedback is more valid than other people's? In deciding the governance of the feedback, how are you ensuring that it's not biased, that it's fit for purpose? Maybe your bias against unqualified people who don't know what they're talking about is completely valid. You're not biasing on the basis of sex, race, gender, or sexual orientation, but if you're biasing like, "I'm not going to listen to anyone who doesn't have a certain qualification," then you're probably going to reproduce the bias that already exists in society anyway. You haven't addressed the fundamental issue.
Yeah. Now, when you were with IBM Watson, you guys were selling both services and SaaS?
The reality is, and that goes back to what could be done, yeah, we were selling both services and SaaS. Some of the services were to set things up, because on the clinical trial side you needed to do integration and one-time work to get it going, and then a little bit of service just to run it. It depends. On the discovery side, there were services to really get to the answer. The technology could do some of the steps, but actually it was people, working with experts within the pharma company, that could really get to the insight. The technology was taking away some of the manual effort, presenting novel connections, enabling discussion and discovery, but it was always a professional who was contextualizing, rationalizing, and confirming or rejecting any of those connections. So it was a lot less "here's an answer" and more "based on what is known, here are some possible connections, some suggestions of things that might make sense." Of course, they don't all make sense. So in that sense it was more of a collaboration: let's take the body of scientific knowledge and connections, the biology, genes, proteins, diseases, drugs, and let's see if we can find new connections.
What's interesting, though, is that the industry loves to bash IBM Watson and say that they bit off more than they could chew. At the same time, ChatGPT and other large language models are biting off literally everything at the same time.
And I think that's the question: what does it really know, and what does it trust? It has just ingested the internet, right, all that was written on the internet through 2021, or however it was trained. So all of the good and bad is in there. What is your training data? What is your training set? What is the validity of what's in there? Watson for Drug Discovery was based on a scientific corpus of publications and peer-reviewed journals. Of course, you can argue that the peer review process is biased and things get retracted, but that's the standard, right? Where's the standard of content for the internet? I think one of the challenges has been that there is no standard. Somebody like McKinsey, with all the decks they've ever done to communicate strategy and corporate valuation, probably has a fantastically valuable wealth of content, but that's not in ChatGPT. You're not getting that level of depth, and I don't actually know what you are getting, you know?
No, and that's one of the issues. One of the strong points of the IBM Watson partnership, which I benefited from on the sponsor side, was that Watson had authoritative training. You trained on the disease states, you trained on the physician notes of the actual institutions that were using the tool, and then we trained on our clinical trial protocols. So we had known training sets that gave us high-reliability, high-trust, high-value responses. Almost as important, Watson gave full transparency into the data points that went into each decision. As a clinician, as a sponsor, you could drill into each decision and go right into the text and see exactly what it was pulling from. And that's the only way my legal and compliance people would ever have signed off on it. When people talk about bringing large language models into pharma and into healthcare: if they cannot provide that transparency to source and that clarity on the training, they're never getting past the medical-legal-regulatory review process that is mandatory at every pharma company. That's just the way the industry operates. We talked about whether ChatGPT will replace content creators or agencies, and the short answer is: not now, because more than 50% of what those agencies charge for is the time to do document preparation, tagging, diligence, fact-checking, and validation in preparation for med-reg-legal review. The content itself is maybe 20 to 40 percent of the cost of any content piece.
I think it's interesting. The surprising thing to me, from my little play with ChatGPT, is the effortlessness. There's me thinking, how would I answer this question, with all the time and struggle, and then you just ask the question and it so effortlessly pings you back something that is reasonable. I don't know why it thinks that; I don't know that it's necessarily true. But the effortlessness with which you get an answer is the striking thing, and that's what I see people responding to. I think that's a cultural thing, about removing all the friction from everything. But if you do that, what is anybody actually thinking about? Part of the creative process, part of coming up with something new, is that there has to be this tension of "I'm stuck, I don't know how to do it," and then you break through it. Something like ChatGPT makes it look like everything's easy, and I think that's bad for our culture and our experience, because the ability to wrestle with something, especially when you're writing or creating content, that is the process. I don't think there is a way around it. And at some point, if you play this out, the people who get paid for thinking and writing creative content go out of business. Then what are we left with? Do we lose all the creativity?
But yeah, you wind up with confirmation bias, because it tends to regress to the mean.
Yeah, and also there's something about the surprising connections, when people link a work situation, or a strategic problem, to something that is more generalizable in life. I don't know that a computer
can really do that. Maybe it can. But again, I think it's about what you're asking, what the use case is.
What is the value and benefit of giving it to a model to come back with an answer? So I think...
What most impressed me about ChatGPT is that if you've got a broad operational knowledge of how different content-related things work, you can build really powerful prompts. For instance, I'm helping somebody who has to write a TED talk about a very dry topic. They're experts in their field, they know all about this topic, and it is brutally dry. I suggested they punch in their topic and ask ChatGPT to write the story of the topic following a classic "rule of three" story arc. That's a formal storytelling construct, well documented in English literature and linguistics; storytelling is a whole field of study there. And it spit out a very happy little story that read like Goldilocks and the Three Bears, or the Three Little Pigs, because that's exactly what that story arc is. It made that really, really dry topic more interesting for an audience. Instead of going into the whos and whats and the numbers, it was: here are the folks who are having a miserable time with this, here are the folks who are trying to improve it, and here are the folks who have already achieved success, and they're happy and get their weekends free to play with their kids instead of working 24/7/365 to do things the old-fashioned way. So the prompts themselves are important, but understanding the structure of what you're doing, and reaching into parallel fields or adjacent disciplines, can really put more power behind them. But again, first of all, there's no listing of the prompts.
But there are companies springing up to do that, where people are making money. I think it was Tim Ferriss, or someone, saying that in the short term these companies that are setting up with prompts, sort of ready to use, are the ones to follow, because those are needed, right? You can't get to any value, any benefit, without the right prompts.
So if you're a marketing agency or a content creation agency, integrating ChatGPT into your marketing tech stack can really help you. Again, they haven't solved for how you reference or cite accurately; there are a lot of well-publicized errors in terms of the accuracy of this stuff. But you could operationalize and streamline, almost in a component-content, Mad Libs sort of fashion, a variety of different content. It's going to be brilliant for A/B testing: instantly creating a large volume of content variants that can be programmed in, A/B tested, and the best one run. I also see the potential that if you can train GPT on personas, then you can write the same content for different audiences very, very quickly.
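The "Mad Libs" idea above can be sketched in a few lines: take one approved core message and fan it out into persona-by-channel prompt variants that a language model could then turn into copy for A/B testing. This is a minimal illustration, not a real vendor integration; the persona list, channel list, and template wording are hypothetical examples.

```python
# Sketch: expand one approved message into persona-by-channel prompt variants.
# The personas, channels, and template below are illustrative assumptions only.
from itertools import product

PERSONAS = ["oncologist", "clinical trial coordinator", "patient caregiver"]
CHANNELS = ["email subject line", "banner headline"]

PROMPT_TEMPLATE = (
    "Rewrite the following message as a {channel} aimed at a {persona}. "
    "Keep all factual claims unchanged.\n\nMessage: {message}"
)

def build_variant_prompts(message: str) -> list[dict]:
    """Expand one message into persona-by-channel prompts, ready to send to
    a language model and then A/B test the resulting copy."""
    return [
        {
            "persona": persona,
            "channel": channel,
            "prompt": PROMPT_TEMPLATE.format(
                channel=channel, persona=persona, message=message
            ),
        }
        for persona, channel in product(PERSONAS, CHANNELS)
    ]

variants = build_variant_prompts("Enrollment for the Phase 3 study opens in May.")
print(len(variants))  # 3 personas x 2 channels = 6 variants
```

The point of keeping the factual message fixed in the template is exactly the med-reg-legal concern raised earlier: only the framing varies per audience, while the claims stay the ones that were reviewed.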
I think that's the prompt thing, you know: "write this in the style that will appeal to" such-and-such. It does seem able to do that, or at least to understand how to do that. But I don't know of any measures of how good a job it's doing.
I think people have to build those metrics as they build their solutions. But as a CRO and a business developer, how do you see managing that intellectual property? Like you said, McKinsey has a ton of content that's proprietary. There are many content vendors in healthcare. There's the entire corpus of ClinicalTrials.gov. There's the entire corpus of PubMed and MEDLINE, and then proprietary databases like Ovid. How do they bring their content into a large language model without giving it away? Google built itself without paying for content in any way, shape, or form. There were grumblings from the content industry at the beginning, but they all fell in line and learned how to generate revenue through AdWords, advertising, and things like that. Large language models are going to break that advertising model. How do you sell and protect your content as authoritative training in a large language model without giving away the farm?
It's interesting, because in a way, as a CRO, we're not really in the content business. When I think about content in the context of a CRO, I probably think about the protocol; that's the biggest input of content. The output is the data that we collect to drive the support for trials. So maybe the biggest area of opportunity, in terms of use cases, is the development of more fit-for-purpose protocols. And I think that's somewhere, with TransCelerate, standard protocol design, all this stuff, that ChatGPT, or a specialized version of it, could help.
Maybe that's a really good use case, where it can help to accelerate better, more fit-for-purpose protocols. And there are all sorts of reasons that hasn't happened yet. Again, I'm not even sure: is it a technology problem, or is
it more of a collaboration and human-incentives problem? I think that's a great discussion to have as well. I also think it could possibly help us communicate with patients, when we have to do patient-facing materials, with the emphasis on diversity and things. But again, it all assumes some level of specificity and appropriateness in the training, and that it's really, truly helping you communicate with those populations in the way in which they want to be communicated with. So again, we come back to: how was it trained, how does it know, and is that appropriate? You've got to be very thoughtful and considered. Do you want to just outsource that to a machine? I think it's challenging, especially because it's a relationship built on trust, with all these nuances. It will need oversight; that's the other piece. It's not bypassing the step. Maybe you're speeding up a small step, if you can get comfortable that this is an appropriate tool for it. And I don't know that we're really there yet.
Well, I think that's the key, right? ChatGPT never claimed to do 80 percent of what people say it can do. But there's a lot of perception in the marketplace. That happened
in the Watson days too, right? There was what we said it could do, and then there was what everyone assumed and wanted it to do. For me, looking at ChatGPT and my experience with it, I'm like: oh, this is actually what people were expecting when we used to talk about what Watson could do. Sort of magically, here's an answer, with all this prose and evidence behind it. That's not what was happening then. But here we are, eleven years, maybe more, it must be more than that, from that Jeopardy! show, and now there's something that looks kind of similar, right?
yeah. But
Not only that it might know the answer; it's also the experience of ChatGPT, which is structured and appears thoughtful but isn't really thoughtful. That, I think, is the nuance. It has the elements of structure, brevity, and succinctness that make it feel like it has distilled all the data and considered everything. But the reality could be really different, and I think that's the risk, right?
And I think that's important. First of all, the brand gets diluted, whether it's ChatGPT or Google's LaMDA or Bard or Bing Chat or any of the other systems, when they allow themselves to be defined by the wild, blue-sky, generic public perception rather than staying true to what they are, because overselling and underdelivering erodes trust over time. When ChatGPT launched, it definitely opened a Pandora's box. We're not going back; the paradigm has changed, whether anybody wants to admit it or not. We're not going back to pre-GPT days in terms of language models and applications. But at the same time, you can't just claim it's going to do all these things it was never intended to do. For all intents and purposes, ChatGPT was meant to pass the Turing test, which it's done, right? It can converse with a human as fluently as another human, not 100 percent of the time, but an incredibly high percentage of the time. It can come across as factual, but it never advertised that it was going to be factual, given its training sets. I've worked with other AI vendors; I'll call out Huma.AI, and I believe they are now a ChatGPT partner as well. One of the strengths of their technology is that it's not just a chatbot that will pass the Turing test. You can connect a tabular data source, a data structure, or a database to Huma.AI, and it understands the structured contents without training, or trains itself. It pulls the relevant information. So in the context of healthcare, if I want to talk about the incidence and prevalence of a disease, my content system running on Huma.AI will read into the SEER databases, understand and identify the right columns, and generate the correct statistics for you. If you ask what the incidence and prevalence of CML was in 1989,
it will accurately find that data and surface it for you alongside descriptive text from long-form content such as PubMed, and it will give you a comprehensive answer. It doesn't spin a story out of it. Nobody's ever going to mistake it for a human; it's not going to pass the Turing test. But it does do a good job of presenting the facts.
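The structured-query pattern described here, mapping a natural-language question onto a table lookup rather than free-form generation, can be sketched very simply. This is a toy illustration only: the table, its numbers, and the tiny parser are made-up placeholders, not real SEER statistics or any vendor's actual implementation.

```python
# Toy sketch of answering a natural-language question from structured data.
# The table values are dummy placeholders, NOT real incidence figures.
import re

# (disease, year) -> incidence per 100,000 -- illustrative values only
INCIDENCE_TABLE = {
    ("cml", 1989): 1.7,
    ("cml", 1990): 1.8,
}

def answer_incidence_question(question: str) -> str:
    """Tiny parser: find a known disease acronym and a four-digit year,
    then look the pair up in the structured table."""
    year_match = re.search(r"\b(19|20)\d{2}\b", question)
    disease_match = re.search(r"\b(cml)\b", question, re.IGNORECASE)
    if not (year_match and disease_match):
        return "Could not identify a disease and year in the question."
    year = int(year_match.group(0))
    disease = disease_match.group(0).lower()
    rate = INCIDENCE_TABLE.get((disease, year))
    if rate is None:
        return f"No data for {disease.upper()} in {year}."
    return f"Incidence of {disease.upper()} in {year}: {rate} per 100,000."

print(answer_incidence_question("What was the incidence of CML in 1989?"))
# -> Incidence of CML in 1989: 1.7 per 100,000.
```

The design point matches the conversation: because the answer is a lookup into a curated table, the source of every number is traceable, which is exactly the transparency that a chat model generating prose from an opaque corpus cannot offer.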
Yeah, that sort of querying: you've got some structured data, and you can use a natural-language query to bring that data to life, meaning "show me..." That's not new; it's been around for a while, using different tools and add-ons. So I think the adoption, and the integration into more common workplace tools, is what gets to scale. And it comes back to the commercial model and the value piece. That is
the issue here: it's been around for so long that people ignored it. Nobody believed any of this was worth paying attention to. You and I have both been involved in these types of chatbots, even from GPT-1, things that were barely a step above Microsoft's Clippy, up to today. The lightning strike of ChatGPT coming out and passing the Turing test suddenly woke everybody up. This slow burn became a forest fire out of nowhere, but really it wasn't out of nowhere. It's been
a gradual build, right? Well, it never is out of nowhere. These things never are; they're all incremental, incremental, until suddenly there's this kind of cultural moment and awakening. You just mentioned Clippy, and I just saw, I don't know if it was a meme or a cartoon, I actually don't know the definition of a meme anyway. It was a cartoon of, I think it was Shaggy, or whoever the guy in Scooby-Doo is. Scooby-Doo, thank you. You know how they always unveil the villain at the end, take the hood off? Well, the villain was ChatGPT, and they take the hood off, and underneath it's Clippy. I thought of that when you mentioned it.
That made me think of it. Because you're right, these things all build on each other. And that's
in some ways like COVID: everything can happen so fast and change, and then it's like an elastic band, that sort of collective consciousness, the tension and the release of tension. And I think none of that makes it any easier to make any real systematic changes to the process of pharmaceutical discovery, development, and commercialization. The processes and practices of what can be done, and what needs to be done from a regulatory, legal, and ethical perspective, haven't changed. So that is why I think it will just take time for a tool to come to life and deliver value in the context of those constraints. Because all these people need to understand it, right? And that goes back to the Watson days. Nobody had any...