Hey everyone, and welcome back to The Long Run Show. This is your host, Michael O'Connor, here with Austin Wilson. It's always good to be back.
Today we're going to be talking about something that is near and dear to my heart and to my career as well. Austin is going to be peppering me with some questions, and I'm interested to hear his takes on many of these things. We're going to talk about artificial intelligence: where it's been, where it is now, where it's going.
And most importantly, how it relates to the long run. Is it a big deal in the long run? Is it a threat to humanity, as Elon Musk has said it may be? I'm excited to hear your thoughts on all that, Austin. Yeah, what's really great about this episode is that you have a lot of experience with it.
You have a lot of experience with deep learning and neural networks and all of that. So I think this is a great time to crack open that brain of yours and see what flops out. That sounds pretty disgusting when I put it that way, yeah. But really: artificial intelligence.
I think it's overhyped right now. Any investment decision that you're making at the moment, thinking you're going to get humongous returns from artificial intelligence: I personally think, just to get this out of the way, that's a really bogus buzzword investment right now.
Now, I would love for you to tell me that I'm wrong. I just think that the companies doing it right now are either private, or they're so large that it's only one small department within a huge company, so you're not really getting exposure to the idea or the innovation of artificial intelligence.
You're just hopping on some sort of bandwagon, unless, like I said, you're buying a private company. And if you're buying a private company, I want to talk to you, so if you're listening to this show, hit me up on LinkedIn. With that being said, I think artificial intelligence needs to be on everyone's minds, because it seems to have some very large implications for the future.
And not only things like, what if we had an AI algo bot trading equities? That could be wild. Okay, so they're not doing that right now, so there we go. But that could be wild. What does that mean for the markets and price discovery and all that? That's somewhat of a passive-versus-active debate, but it could have large implications for the financial system.
But beyond that, just in terms of the meta picture, I think that's where I'm really interested for the long-run future of artificial intelligence and humanity's relationship to it. I think it's an interesting proposition. So first off, I think it'd be helpful to define the terms here. Artificial intelligence:
what does that even mean? Can you define it simply for us? Sure. The very academic answer is simply that it's some sort of external learning function that is non-human. If you want to be really technical, it's a non-human learning application that performs some sort of learning function, where the standard ideas of intelligence can be found within it.
And it is not human. Does that satisfy your desire for an answer? Yeah, but can you sum that up a little, just to make it more palatable? Sure. Yeah. I think a lot of people think of artificial intelligence as this big glob that's out there:
that there is an artificial intelligence out there, like a Skynet, or one specific Jarvis or Ultron kind of thing. Whereas artificial intelligence is simply the category of many different ways of creating systems that can replicate aspects of human intelligence on their own.
So we have machine learning, we have deep learning; there's a wide breadth of different things that all fit inside of artificial intelligence. So it's not a singular thing, but rather a category. That's really helpful, to understand that it's a category. So within that category, I've heard of deep learning.
Oh, actually, you mentioned all three: deep learning, machine learning, and neural networks. Can you break those down for us real quick? I'm sure there's some cross-pollination between all three, but could you break those down? Sure. Like I said, there are more, but those three are probably the most commonly referenced kinds of systems and ideas in artificial intelligence.
So machine learning is actually another category. Machine learning is a similar kind of descriptor to artificial intelligence, in that it describes certain processes that can create human-like intelligence in machines. Deep learning is a specific form of machine learning, built on neural networks. Neural networks are very specific models that you can build; there are lots of black-box-style ones where they'll have activation functions,
and you can also dig really far into what is actually going on inside a neural network. But a neural network is a specific system: it's a form of machine learning, and machine learning is a form of artificial intelligence. So a deep neural network is machine learning, which is artificial intelligence.
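The black box with activation functions that Michael describes can be made concrete with a minimal forward pass in NumPy. Everything here (the layer sizes, the random weights, the sigmoid activation) is an illustrative assumption for the sketch, not anything taken from the show:

```python
import numpy as np

def sigmoid(z):
    # A classic activation function: squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# A tiny "black box": 3 inputs -> 4 hidden units -> 1 output.
# In a real system these weights would be learned from data;
# here they are random, just to show the mechanics.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def forward(x):
    hidden = sigmoid(W1 @ x + b1)      # first layer of activations
    return sigmoid(W2 @ hidden + b2)   # output layer

y = forward(np.array([0.5, -1.2, 3.0]))
print(y)  # a single number between 0 and 1
```

The "deep" in deep learning just means stacking more of these hidden layers between input and output.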
Does that make sense? Yes, it's about as clear as mud. No, it does make sense. And I'll tell you, from a marketing perspective, this whole space needs a rebrand, because you just said "black box" about artificial intelligence, and that sounds like something scary that I really should be afraid of.
So no wonder everyone and their brother is afraid of artificial intelligence, because of how it sounds. And like you said, "artificial intelligence" in the singular sounds like the antagonist in some sort of sci-fi novel. Totally makes sense, and there are hundreds of sci-fi stories out there like that; we might be talking about that a little bit today. But with all that groundwork laid, I want to go back to my original statement of artificial intelligence being a buzzword right now.
Do you think that is true, or was I totally overstepping my bounds in saying that it's a buzzword when it comes to, quote-unquote, investing in artificial intelligence? Does that track with what you know of the space? So it's interesting, because the answer is complicated. And I appreciate that you mentioned that you feel like there aren't any easy ways to invest.
I can personally share three individual stock picks that I personally own and consider solid plays if you want to get into AI. Again, not financial advice. Well, do share, because I'm right here and I'd like to know. Yeah, exactly. So the biggest one, or at least the most vocal about it,
because it's in their name and their ticker, is C3 AI, and the ticker is literally AI. So it's C, three, A, I. Exactly. They are very specifically an artificial intelligence company. You can routinely see their full-page ads in the Wall Street Journal. They do a variety of different work, and yeah, as their name says, their whole MO is artificial intelligence.
That being said, they definitely have other data stuff, and they're not quite the same as an AI ETF, which is my next pick. Which, I've got to find it: where's the AI ETF? Maybe I'm not in the AI ETF. I know there's an AI ETF out there.
It looks like I'm not. Oh, I'm in the quantum computing ETF. Okay. So I don't actually own an AI ETF, but I'm pretty darn sure there's a ProShares or some sort of AI ETF out there. But the other two picks that I'm directly invested in right now, again, not financial advice:
one is Palantir. Palantir has become a pretty big meme stock, and it's definitely gotten hit hard in the last couple of months as of recording this. But I personally am long-term bullish on Palantir. I think data structures and artificial intelligence are really critical, especially because Palantir is
so laser-focused on data, specifically in defense contracting. We've heard Elon Musk, quote, say that AI is going to be used as a weapon in warfare, and I'm sure it already is. And I'd imagine the first AI tools used in warfare were probably from DARPA or Lockheed Martin or Raytheon, or the other very classic legacy companies.
But I think Palantir is so laser-focused on it that the next big developments in AI for government contracting, whether for war or for defense or for municipalities, those kinds of things: I think there's probably going to be some serious innovation coming out of Palantir, from what I've heard.
There are people on both sides of the aisle. Some people say Palantir is terrible, short Palantir; and lots of people are like, oh, buy Palantir, whatever you do, buy Palantir. I'm in the middle; I'm focused on what they're innovating in. And for me it's a long play.
I'm not looking for a very short-term gain on Palantir. I'm looking at more of a three-to-eight-year kind of outlook and expecting some innovation. So that's Palantir. And then the least well-known, because I'd say most people have probably heard of Palantir, since it became a meme stock, and of C3. I stumbled into Palantir, actually;
I now own some Palantir, originally not knowing that they were involved in AI. Oh, wow. Okay. And again, they're not the same as C3, with AI as their ticker. Palantir does some AI; again, they're not a totally-AI company, but I think they're a good AI play. Gotcha. And then the last one, which is not very well known, is Itron, ticker ITRI.
Technically they're in power and energy, in that they make grid and transformer and load-forecasting software for electrical companies and the Department of Energy. And I have one of those meters; everyone has them on their house. They make an enormous number of smart meters, Internet of Things applications for electrical power.
And they've gotten clobbered in the past three months as well, a similar trend to Palantir. But I am pretty solidly bullish long-term on Itron, because, number one, if we see the kinds of changes in our electrical grid infrastructure that both sides of the political aisle are calling for: on the more environmental side, the calls are for more and more renewables, and
on the more hawkish side, there are calls for better power grids anyway, whether it's more nuclear or more hydroelectric or more oil and gas. Either way, I think there's always going to be a need in modern society for innovation in electrical power.
And Itron is, I think, from the research that I've done, one of the unique ones, in that they don't directly build power lines or anything like that, but they provide the systems and the software to help the electrical companies and the federal and state governments run and operate those systems as efficiently as possible.
And specifically, they have some of the best tools, mainly neural networks and deep-learning AI, that help forecast electrical loads and grid loads for these electrical companies. We've seen a lot of innovation coming out of Itron recently, and I think they're a sleeper play.
I think that with innovations in electrical load, they're one of those ancillary plays that you wouldn't necessarily think of versus a BP or a General Electric or something like that, but they could be a really strong play, especially with AI being used for industrial and energy-grid use cases.
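The load forecasting Michael describes can be illustrated with a deliberately oversimplified model. Real forecasters use neural networks and far richer inputs; this sketch just fits a linear relationship between temperature, hour of day, and grid load, and every number in it is invented for the example:

```python
import numpy as np

# Invented history: [temperature in F, hour of day] -> observed grid load in MW
X = np.array([
    [60.0,  3], [65.0,  8], [80.0, 13], [95.0, 16],
    [85.0, 18], [70.0, 21], [62.0, 23], [90.0, 14],
])
y = np.array([420.0, 510.0, 640.0, 780.0, 730.0, 560.0, 450.0, 700.0])

# Fit load ~ w0 + w1*temp + w2*hour with ordinary least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_load(temp_f, hour):
    """Forecast grid load (MW) for a given temperature and hour of day."""
    return coef[0] + coef[1] * temp_f + coef[2] * hour

hot_afternoon = predict_load(93.0, 15)   # demand spike: AC running hard
mild_night = predict_load(58.0, 2)       # low overnight demand
print(round(hot_afternoon), ">", round(mild_night))
```

A utility would use forecasts like this to decide how much generation to bring online; neural-network forecasters simply learn a much more flexible curve than this straight line.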
Interesting. You bring up some interesting points. I guess I still go back to, and maybe I'm thinking about it wrong, my previous statement: that it seems like there's no great way to get exposure directly to that innovation, except for maybe this C3 AI.
And this is the first time I'm hearing of this company. But you also mentioned that they're definitely focused on data at the moment. So I wonder if it's one of those, and I don't necessarily mean this in the accusatory tone it's going to come out in, one of those sleight-of-hand moves that companies often play, where they say they're working on the buzzword trend, and really the revenue that's sustaining the company is coming from some legacy business line. It could be
tangential to the new innovation, but the new innovation is not necessarily producing any revenue for the company. Again, I'm sounding like a total bear on AI, but it is sometimes frustrating. For instance, I really dove into quantum computing, and I was like, yeah, there's a quantum computing ETF, but for a lot of these companies, most of the revenue isn't tied to quantum computing yet.
It's at such a young stage that there's just not a lot of public money that can get at it. So I appreciate you pulling up those three, but it seems like maybe I'm approaching it wrong. Maybe this whole AI innovation, and its applications, live within current businesses, and it's not like someone is creating a standalone product with it.
AI isn't a product or a sector; it's more meta than that, and covers multiple sectors and multiple products and processes on the back end. So maybe I'm thinking of it wrong, I don't know. What is your take on that? I think that's a good point, because if you just Google "best AI stocks," you get Nvidia,
Alphabet, Amazon, Microsoft, IBM, Qualcomm, a semiconductor company. Most of them are not getting significant revenue from AI, though Amazon and IBM are actually possible exceptions, because Amazon is getting an unbelievable amount of revenue from AWS, and they're building some really incredible AI solutions in AWS.
And actually, I've been even more impressed with what IBM is building with IBM Watson and their cloud systems. A little background: I started a very small boutique AI consulting company for a short period of time, and did some work directly in the field with small-to-medium-sized businesses, helping them implement tools and software.
And I was really impressed with what IBM is doing. I think IBM could be another great pick, where, you know, if you want to be investing in what probably is the future of AI, but you don't necessarily want to be completely leveraged on AI, I think IBM is a great stock. And again, I do own shares in IBM, full disclosure; again, not financial advice. But I think IBM might actually be, in some way,
not an exception to what you're saying, because I think what you're saying is true. AI is tools; it's not necessarily a specific product. The products that come out of AI are things like IBM Watson and AWS, not AI itself. Ultimately they're data systems, cloud data systems. Stocks like Snowflake or Qualcomm are more plays in that zone.
I think, like you said, it's not like buying Apple because you see the iPhone and you want to invest in the company that makes that product, right? Because the roots of AI are very academic and very open. You can go online and look up ways to create your own neural networks, and you can make a neural network in a day.
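That "make a neural network in a day" claim is easy to verify with the classic from-scratch exercise: training a two-layer network on the XOR problem with nothing but NumPy. This is a generic textbook sketch; the architecture, learning rate, and iteration count are arbitrary choices, not anything specific from the conversation:

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR: the classic toy problem that needs a hidden layer to solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Two-layer network: 2 inputs -> 8 hidden units -> 1 output
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):                        # plain gradient descent
    h = sigmoid(X @ W1 + b1)                 # hidden activations
    out = sigmoid(h @ W2 + b2)               # network output
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagate the mean-squared-error loss
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(f"loss went from {losses[0]:.3f} to {losses[-1]:.3f}")
```

Nothing in it is proprietary or patented; it's the kind of thing covered in any open course, which is exactly the point about how hard the raw technique is to commoditize.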
It doesn't mean it's going to be solving a huge business problem, but you can make them. They're not necessarily products that are patented and restricted. So it's difficult to easily commoditize AI, which I think is probably a good thing, but I'd want to hear your thoughts on that. So it's almost like, and this analogy is way too often used, the nineties and the internet, like open source. Or even a sub-analogy to that would be the protocols for email: they're open.
And so you couldn't find a company that was the email company. There were companies involved in it, building things and products on top of it, but that wasn't necessarily their whole thing. So I guess that makes sense. Maybe it's more helpful for me to think of it as
an open-source kind of project, almost. That makes sense. That makes sense to me. I just do have trouble with, like, Google, or even Amazon. Yes, they're getting a lot from AWS, but they're also getting a lot from a lot of other revenue sources. Same with IBM. It's like, okay, how do you delineate between AI
and just data, and all of the revenue that they're getting from either storing data or helping people parse data? How do you really delineate between the two? I guess I was looking for a pure play in AI, which may not quite be here yet, and that's fine. So, to pick your brain, and you mentioned some of your background in this, you were doing some consulting for medium and small businesses:
I've heard, just from the little overlap between pop culture and the investing world, some large concerns about AI being this existential threat to humanity, possibly taking over the world and creating what we would call, like, an Ultron or a Jarvis.
And obviously there's been a lot of ink spilled and a lot of film rolled on sci-fi movies and novels that talk about an AI, an all-seeing eye. Which, again, really bad branding for this whole thing, because it came from the academic world and they didn't market it correctly. But
is this something we should be concerned about, just from a human perspective? And then also, how does that translate into an investment thesis regarding AI? But I guess first let's handle the fun part, the human, apocalyptic aspect. Yeah. So what you're describing is
generally called artificial general intelligence, or general AI, which describes a system that, once it's turned on, can pretty much completely operate on its own with really all of the major faculties of human intelligence. So it's self-aware, it's learning, it can actualize its own learning,
it can guide itself to learn what it wants to learn and take the actions that it wants to take. It understands that it exists, and understands that it has agency, this kind of thing. This is the Skynet, the Jarvis, this kind of apocalyptic what-happens-if-slash-when-we-reach-general-artificial-intelligence scenario. The truth is, we're really not there yet, by any stretch of the phrase. But there are estimates that maybe we'll get there by 2050, or 2030, or something like that.
Ultimately, it is speculation. But the interesting thing about something like a general artificial intelligence as a system, and this is where the biggest concern comes from, is what happens if it's connected to the internet. There's so much information on the internet that we as human beings can't process it all, because we simply don't have the energy all the time. If there is some sort of general artificial intelligence that can learn at the pace of whatever amount of RAM and computing power it has, and can actualize that and reach its full potential, why would it
choose to serve us? It could figure out a way to reroute itself to a different IP and move, and be this fluid object that is all over the internet. It is science fiction for now. There is the possibility that maybe we will reach general artificial intelligence, but there is also the possibility that we'll never reach it.
There's the possibility that it is something that cannot completely be actualized, because there's a host of limitations, starting with the fact that we're still trying to figure out how the human brain works, still trying to figure out how we understand causality on a very basic level: how we interpret, how we communicate, how we perceive and think and learn.
There's been an enormous amount of innovation and advancement on that front; just the fact that any kind of AI exists is really impressive to think about. But I think that general artificial intelligence is farther out than most people think. At the same time, there is some validity to the idea that, if it happens, you have to ask:
why would a general artificial intelligence care about humanity? Why would it have any problem with just figuring out how to survive as best as possible and wiping us out, or enslaving humanity, et cetera, et cetera? But one of the interesting hypotheses I've heard is that if we reach general artificial intelligence, it could happen that more than one team of researchers gets there at the same time. So maybe there will be two or more, and they'll compete with each other,
locked in this eternal artificial-intelligence combat, which I think is just wild to think about. Sci-fi on top of sci-fi. Yeah, a triple-decker sandwich of sci-fi. Yeah. So maybe there are already multiple general artificial intelligence systems that exist out in the great ether of the internet
that are already competing, locked in an eternal struggle. It's very unlikely, but it's a fun thought experiment. They're often considered similar to a virus, in that one would mutate however it can to survive. But the important thing is, we don't even know.
We don't even know the mechanism of teaching an artificial intelligence system how to survive, how to learn that it exists, that it has any purpose or meaning. We really don't understand that at even a rudimentary level. And I think there is enough healthy skepticism and healthy worry out there, and there's definitely some over-worry.
There are definitely some doomsayers saying it's going to be the AI apocalypse. And same thing with jobs. Not Steve Jobs, but, like, working jobs. AI has been hyped, by some of the same people, as this thing that's going to create a better world,
and then also as a bad thing that's going to take away everyone's jobs. When really, it's a tool right now. It is a tool that is meant to assist with jobs, assist with understanding work, and increase efficiency. And yes, for certain, AI has displaced jobs already: robots powered by artificial intelligence that can flip hamburgers or sort mail or do these different tasks. But you need more and more data scientists and machine learning engineers;
you need more and more people keeping these systems up and teaching these systems new things. So I think at the end of the day, it's simply another form of creative destruction: the carriage drivers before the automakers, and so on. And so then you have to go through retraining.
There's a human aspect to that. I once was very bullish, in an uncaring way, on innovation. I was like, it doesn't matter; it's just going to create more jobs, so what's the problem? But now the person who was flipping burgers or delivering mail has to figure out what they're going to do next.
And their skill set may not be enough to go become the data scientist who builds the robot that took their job. Or they may be at a point where they don't want to get retrained, or can't get retrained, for a variety of reasons. So there is a human aspect to that. I just wanted to push back a little bit.
No, that's fair. There is a human aspect that you have to think through, and there's no great solution for that. We're getting a little far afield from artificial intelligence, but one good solution, I think, is just personal ownership: the person who might see that coming down the pike preparing for it.
I think that's a fantastic solution, or asking for help to prepare for it. So it doesn't sound like general AI will be a near-term problem, as in within the next ten years, but it seems like there's some potential for it to be an issue in the long run. What kind of percentage risk would you put on that,
if you can? I'm not expecting you to be a hundred percent accurate, but I'm saying long run as in thirty-plus years, so we're talking 2051 or later. That would be within our lifetimes, too, so this is important for us right now, and important for probably most of our listeners too.
I hope we'll all be around in thirty years. So, not the over-under, but the percentage risk of that actually happening: one singular general AI being in existence. I won't say threat, because maybe it won't be a threat, but in existence. That's a great question. A general artificial intelligence being in existence, whether a threat or otherwise, by 2051:
that's a really difficult question. I know, I'm putting you on the spot. Yeah. Just to give you a raw number, I would say probably twenty-five percent, which I think is a very optimistic number. Optimistic as in optimistic towards it actually being real. Yes. Yeah, because for the people who say it's guaranteed to be here by 2030, I think things are much more complex than that.
I think that's trying to get clicks. The scientific community as a whole is in a revolution of causality: we're starting to really understand why our brains think the way they do, why we make mistakes. We have behavioral science and heuristics, and we're learning a lot more about ourselves, which I think will definitely speed up artificial intelligence research and the possibility of creating a general artificial intelligence,
the more we understand ourselves and our brains. But at the same time, there's a huge leap from a physical system like our brain, which has evolved for as long as it has and has very special properties that we don't fully understand yet,
to trying to replicate a non-corporeal system that can do the same things that we do. It's very difficult. I think it was Elon Musk who said it himself. I think he said something, around 2010, about autonomous driving, like: we're going to have it locked down by 2015. The years might not be correct, but it was some span of time like that.
And then right before the year he said we'd have it, he released this official statement of, yeah, it's a lot harder than we thought to do fully autonomous driving. And if we can't do fully autonomous driving yet, how far away are we from full artificial intelligence,
if we still can't actually do autonomous driving? But it is really amazing and magical to see semi-autonomous driving in practice. It's pretty incredible, the leaps that have been made in the last ten, twenty years in AI: stuff that would have seemed like magic in 2000. If you stepped into a Tesla in 2000 and the person's not holding on, and it's just moving and doing its own thing, or they summon it with their phone,
yeah, it just drives up, and you'd be like, okay, where's the person hiding in the front driving it? It's pretty magical, what has already been accomplished in AI. So my outlook personally, in a portfolio and long-run view, is more to be in awe of what we've already accomplished in the field,
and to be excited about that. I really don't think about general artificial intelligence on a regular basis, and I'm not super worried about it. I guess the one exception to that is China: China is very rapidly expanding all of their AI programs. So you could definitely see some uses of AI that could be morally questionable. Militarily, there could be a very solid incentive to come up with something very close to a general AI for warfare, which gets us talking, like, gosh, WarGames, the movie with the nuclear launch codes and everything, something like that.
It's possible, but ultimately it's not as possible, or not as soon, as people think, and I'm not necessarily super worried about it. I'm more excited about the possibilities, excited about the things that have already been done that are improving lives and making things better and cooler and more magical.
I think you bring up a good point about the human aspect of not simply letting jobs die out. I think that's an important thing to note with AI as well, and an important conversation to have policy-wise. But yeah, my overall outlook is just this excitement. And yeah, the answer to the question: I would say a twenty-five percent chance of a general artificial intelligence, that we either know or do not know about, by
2051. That makes sense. Yeah, it is interesting. I was reflecting on the fact that when we don't know the future, and we're trying to predict the future, the path of innovation or where something is headed, we often use the past, because that's our only frame of reference.
And we all know that past returns are not indicative of future results. We all know that, but we still take that mindset and framework and apply it. And we do the same thing with tools: we apply old measurement tools to new forms of growth, and that doesn't always work. In fact, from an economic perspective, that could be an argument against GDP;
maybe that's not a really great tool anymore. So, for instance, it sounds like in all of the conversation around general AI, we're applying human characteristics that may not even be on the radar, or even close to being part of what we might call general AI in the future. We're applying these moral characteristics
just because that's the only framework we know. And I think that's what's really interesting. Obviously, it's what makes the future hard to predict: we don't know what it's going to be, and we also don't have a way to measure it or plan around it. Which also makes it exciting. It'd be really boring if we knew what was going to happen.
So that's all very helpful from an abating-my-nightmares perspective. But from an investment perspective: you mentioned some tickers earlier, and again, you heard where I stand. I seem very bearish, and I don't mean to sound so negative, but it just seems like there's not really a pure play in AI.
Maybe C3 AI is one. But what would you do from a portfolio perspective, in light of this whole conversation around AI? Sure. I think that, at least in terms of their positioning and what they're saying, C3 is probably one of the best pure-play options out there.
I personally own some. I think C3 is probably a good long-term play for AI; maybe they get bigger, or get bought out, or who knows. I think they're probably one of the top pure-play-style options. But I also really like IBM, Itron, Palantir. There are lots of options. And then if you want to go for more of the chips that AI needs to run on: Nvidia, KLA, Qualcomm, Taiwan Semiconductor.
But I'd say if you really want to be mostly exposed to AI itself: probably C3 AI, probably IBM, probably Itron for kind of industrial AI. Personally, I need to do more research myself. If you really wanted to find pure plays in AI, do some research. Again, no financial advice here, but always look more in depth.
I would be surprised if there weren't other pure-play-AI kinds of things, but like you said, they could be private, or just not very well known, right? Yeah. It's tough from a trading perspective: when you get to really unknown companies, sometimes the stock is so thinly traded that you can hardly get enough volume to make it make sense, getting in or getting out, which is often more of the issue.
So would you at all recommend ETFs, like a sector ETF, for AI? I would tend to think that with something so open source, this would be a bad way to get at it. But again, that's me going bearish again, which is the theme of this conversation. So would you, not recommend, but would you even consider, for yourself, using an ETF to get exposure to AI?
I probably would. I would need to do more research on what their filtering and vetting is as an ETF, but yeah, I would say an AI ETF, and a quantum computing ETF is something I'm looking at as well, might be valuable, because then you don't necessarily have to do an enormous amount of digging.
You're leaving that up to the fund managers. And personally, I would probably hold the stocks I currently have, the C3, the Itron, the Palantir, the IBM, and simply add the ETF on, rather than sell my other AI plays to buy the ETF. So it'd be beefing up that portion of your portfolio.
Exactly. I wouldn't swap it out, because I think the individual stocks are all a good play. And especially for something like an IBM, Itron, or Palantir, they're not just AI, so you're not going to get hosed if some government regulation comes out and bans AI or something crazy; there is a lot of negativity around AI, general artificial intelligence, and all that. No, I'm not worried about that happening, but a possible black swan event, you never know. So yeah, I would look into ETFs as well; just do some due diligence on them. Interesting. This has definitely been enlightening for me, because, as you can tell, I'm still pretty bearish.
I'm still not real positive on buying any sort of AI stock or any sort of exposure to AI. But I discovered through our conversation that I already own Palantir, which is already somewhat exposed to AI. So I didn't even know it, but I'm an AI investor. I appreciate you sharing your knowledge.
That was very enlightening for me, and hopefully it was enlightening for all of you. If you enjoyed this episode or any of the other ones, please hit us up with a five-star review on whatever podcast platform you're listening on. We definitely appreciate it; it helps us get out to a larger audience.
So yes, we will see you next time on The Long Run Show, for the next episode, where we will talk about another topic in the long run. Yep, we'll catch you later. Thanks for listening.
Support this podcast at — https://redcircle.com/the-long-run-show/donations