Jamie: [00:00:05] Hello everyone, and a very warm welcome to another episode of Hedge Fund Huddle. As usual, I'm your host, Jamie MacDonald, and today we are talking about quants. What on earth are quants? I hear you cry. Well, in this case, quant stands for quantitative trading and quants are those individuals trying to make money from spotting patterns and correlations using complex math models and algorithms. Now, if you don't already like or follow us, please do so now. It's how you hear about our latest news and the next episode. And also, we want to hear from you. So, feel free to write to us or leave a review. Now, as you know, these episodes aim to demystify the world of hedge funds by talking to experts. And today we do exactly that. I'm lucky enough to be joined by two such experts: Nirav Shah, founding partner at Versor Investments, and Tarun Sanghi, a senior quant at StarMine, which is of course, yet another London Stock Exchange Group business. Tarun and Nirav, welcome to the show.
Nirav: [00:01:04] Thank you, Jamie.
Tarun: [00:01:05] Thanks Jamie.
Jamie: [00:01:06] So, Nirav, there's a lot going on there already. There's lots of words to try and explain, but before we delve into vocab and unpacking this topic, perhaps you could just give us a bit of an insight into your own career and when and how you started Versor.
Nirav: [00:01:19] Sure. Thank you for having me. I am super excited to be part of this podcast. I have two decades of investment experience. I'm a founding partner at Versor Investments, which is a New York based systematic investment manager. Versor was set up ten years ago to focus on uncorrelated, systematic strategies using large amounts of data, advanced statistical techniques, and a hypothesis-driven framework. My background is a combination of quantitative research, finance, and engineering. I've worked on various parts of the investment process, ranging from research to portfolio construction and trading. Prior to Versor, I was a founder of a consulting firm focused on quantitative research. Prior to that, I was based in New York and Chicago, working as a quantitative researcher at different hedge funds. Summarizing, over the last 18 years, I've had tremendous opportunities to work on various areas of the systematic investment process, including asset allocation, risk management, portfolio management, and building scalable trading and proprietary research systems.
Jamie: [00:02:26] Thank you very much for that introduction. Tarun let's turn it over to you. Tell us a bit about yourself and about StarMine. I do remember StarMine from my days on the equity research side of UBS. It used to tell all the analysts on our team whether they were any good at their job, basically. But if you could tell us a bit more about StarMine and again, how quants use it to make money in their models.
Tarun: [00:02:47] So, at StarMine we combine classic quant with data science and ML and AI to build clear box stock selection and risk mitigation models. And as you said, we are best known for our ranking of sell-side analysts and our development of SmartEstimates, and these SmartEstimates are used as input to a lot of our stock selection or alpha models. Our most famous model is the Analyst Revisions Model, which completed 20 years in 2021, and we take a lot of pride in it. It's one of the few models which I have personally seen work so well for such a long time. As for my own background, I'm an engineer by training. I joined quantitative finance right after my engineering PhD from the University of Illinois at Urbana-Champaign. Prior to StarMine, I was a quantitative strategist and research analyst for two small hedge funds. At both places the flagship products were equity-focused, long/short, market-neutral portfolios. And one of the perks of being at small funds is you get to work on what I call the three most important pillars of active portfolio management: we built stock selection models, we built risk models, and then we combined the two to manage our portfolios. So, that's a bit about me. And I'm equally excited to be on this podcast with you and Nirav.
Jamie: [00:04:09] So Tarun, before we turn back to Nirav, which we'll do in a second. So, for example, if I am developing a quantitative trading model and I want to base it on which analysts are on a hot streak, so to speak, which ones are out there getting it right, I would get in touch with you, and you would basically give me access to the models that allow me to track which analysts are getting it right. Is that the sort of thing you might expect?
Tarun: [00:04:38] So, we don't share the model; we rank the analysts with our proprietary approach. The entire framework back in the day was used by a lot of brokerage houses to rate their own analysts, so we came up with this statistically rigorous framework which looks at the historical performance of the analyst. There are a lot of biases that analysts have. So even when a big revision is warranted, it's very likely that most analysts will revise their estimates slowly. Nobody wants to stick their neck out and make that big revision all at once. You have the reputational risk, and what if you are wrong? So, they do those things in increments. We identified all those biases through data and came up with a process which would allow us to rate them. So one bias was how they disseminate information. The other bias is what is called the coverage universe. As an analyst, especially when you are starting your career, you don't get to choose the stocks that you are going to cover, whereas somebody who is in the thick of it would want to cover the stocks which give them the maximum reward. So, if you're comparing an analyst who didn't get to choose his coverage universe with somebody who could decide what he wants to cover, it's not a fair comparison, because the opportunity set was different. We took into account these different biases and constraints that analysts have when they cover a universe and provide their estimates, and we combined all of that to come up with what we call, and it's very well acknowledged in the industry as well, an unbiased estimator of their accuracy. And the most important empirical result there was that this accuracy persists. Those who are smart continue to be smart. And those who are not smart exhibit this herding behaviour: they wait for the smartest analysts to make their revision, and then they all make revisions in the same direction as well.
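As a rough illustration of the kind of relative-accuracy scoring Tarun is describing, here is a minimal Python sketch. The column names, the normalization by per-stock error dispersion, and the persistence check are illustrative assumptions for this write-up, not StarMine's proprietary methodology.

```python
import numpy as np
import pandas as pd

def analyst_skill_scores(estimates: pd.DataFrame) -> pd.Series:
    """Toy relative-accuracy score per analyst.

    Expects one row per (analyst, stock, period) with columns
    'estimate', 'consensus' and 'actual' (hypothetical schema).
    """
    df = estimates.copy()
    # Absolute forecast error of each analyst and of the consensus.
    df["analyst_err"] = (df["estimate"] - df["actual"]).abs()
    df["consensus_err"] = (df["consensus"] - df["actual"]).abs()
    # Normalize by how hard each stock/period was to forecast, so analysts
    # with an "easy" coverage universe don't look artificially accurate.
    dispersion = df.groupby(["stock", "period"])["analyst_err"].transform("std")
    df["rel_skill"] = (df["consensus_err"] - df["analyst_err"]) / dispersion.replace(0, np.nan)
    # Average per analyst; persistence can then be checked by correlating
    # scores from one time window with the next.
    return df.groupby("analyst")["rel_skill"].mean().sort_values(ascending=False)
```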
Jamie: [00:06:39] I remember when I was on the sell side, we always had to be very cautious of any sell-side analyst who was covering a company that was a client of the firm, because you could almost guarantee they'd have a buy rating on the stock. Those sorts of things you can't do much about, but they are things you have to be wary of.
Tarun: [00:06:57] Yes. Absolutely.
Jamie: [00:06:59] So Nirav, what I love to do on this podcast is do a bit of then and now and bring people up to speed on where the world of quantitative trading was 10, 15, 20 years ago and where we are today. And I'm sure people listening to this will be aware of Jim Simons, who kind of started it all off with Renaissance. So, what have been the permutations of quantitative trading over the years and kind of where are we today?
Nirav: [00:07:22] So, if I just take a step back and start with what the goals of quantitative investing are, what do we quants try to achieve? There are four main objectives. The ultimate objective is to make returns for clients. That's done, first, through alpha generation: finding new sources of returns that are consistent, through dislocation opportunities, through arbitrage opportunities, or through data-driven alpha signals. Second, systematically applying risk management to manage risk and exposures in the face of quickly changing markets. Third, using technology and algorithms to make data-driven decisions quickly, which allows us to capitalize on speed and efficiency. And finally, diversification: applying these models and algorithms across various asset classes and markets to find alpha. These objectives for quants have not changed over time. They still remain the same. But the means to achieve them have changed, and the two big differences now versus the past are, first, the integration and use of more and more AI and machine learning-driven models. The kind of statistical techniques being used are a lot more advanced today compared to ten years ago. And second, the use of large amounts of unstructured, alternative data sets. Data has always been integral to a quantitative investment process, and we've evolved over time from market data to fundamental data to technical data to now a lot of alternative data sets.
Jamie: [00:09:08] You mentioned one of the goals of quantitative trading is obviously a return. To what extent does volatility in returns play a part? Would you rather make 10% in a year with a nice, steady line climbing all the way through, or 12% in a year with your returns going from plus 15 to -5? Just to give a sense of that in terms of Sharpe ratio, I guess.
Nirav: [00:09:32] I'm not really at liberty to speak directly about returns. But the objective is always to make returns that are steady. Volatility helps, in the sense that volatility in markets helps quants, not necessarily volatility of returns. What I mean is, if markets are very volatile, a lot of dislocation opportunities arise within the markets which may be temporary, and quants, using their systematic processes, can take advantage of these dislocations. So, volatility in markets may not necessarily be bad. The objective is to take advantage of it systematically while not having volatility in returns.
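To make Jamie's trade-off concrete, here is a small, hedged example of how a Sharpe ratio comparison might look for the two stylized paths he describes. The monthly return series below are invented purely for illustration.

```python
import numpy as np

def sharpe_ratio(monthly_returns, risk_free_annual=0.0):
    """Annualized Sharpe ratio from monthly returns (simple textbook form)."""
    r = np.asarray(monthly_returns, dtype=float)
    excess = r - risk_free_annual / 12.0
    return np.sqrt(12) * excess.mean() / excess.std(ddof=1)

# Roughly 10% for the year, earned in a near-straight line.
steady = [0.0090, 0.0080, 0.0085, 0.0082, 0.0078, 0.0088,
          0.0081, 0.0086, 0.0079, 0.0084, 0.0083, 0.0080]

# Roughly 12% for the year, but swinging between big up and down months.
choppy = [0.05, -0.03, 0.04, -0.02, 0.03, -0.01,
          0.04, -0.03, 0.03, -0.02, 0.02, 0.02]

print(round(sharpe_ratio(steady), 1))  # very high: almost no variation in returns
print(round(sharpe_ratio(choppy), 1))  # far lower, despite the higher total return
```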
Jamie: [00:10:20] Tarun, Nirav was just saying that AI has changed a lot within the industry. To what extent, or how, do you use large language models and artificial intelligence in what you do?
Tarun: [00:10:33] Nirav mentioned return forecasting, which I think is the most important problem in quantitative investing. My personal view on return forecasting is that it's still a small data science problem. You have state-of-the-art ML and AI algorithms at your disposal, and a lot of them are provided these days in this so-called low-code, no-code environment. But think of cross-sectional assets like equities or convertible bonds. You're looking at, let us say, 65,000 public companies for equities, and you have 30 years of their data. Most quant research, this alpha research, is done on monthly data, and 30 years of monthly data still adds up to a couple of hundred thousand data points, which is small in my personal opinion. You could have as many predictors as you want, fundamental data, alternative data, but what you are forecasting is a very limited data set, unless you are a high-frequency trading firm or you are building trading algorithms where you have data available every microsecond or millisecond. It's still a small data science problem. So, for such cases, one has to be very careful in using machine learning models. That's my personal view. On the other hand, when it comes to event prediction, identifying bankruptcies or assessing credit risk, StarMine was, I think, probably one of the first groups to have come up with a text mining-based credit model. We released it as a product back in 2011, long before alternative data was called cool. We used brokerage research and a simple Naive Bayes classifier, which is a generative algo: you look at the words in the text and estimate how likely they are under each outcome, and that gives you the forecast. That was the algo we used, and it worked just fine. Of course, now you have deep neural network-based models, which we have used, and they have certain advantages. But if you knew what you were doing and you had the data available to you, one could do a lot of these things pretty early on. In event prediction, we now have a merger and acquisition target model where we are combining fundamental data with alternative data. There we have used an LLM, we've used BERT, which provides you the context. It's a language model that allows you to get the context whether you are reading left to right or right to left, like travelling from India to the USA or from the USA to India; if you miss the 'to' in between, it completely loses the context. We used the BERT model to process the textual information that we have for companies and used it to assess their likelihood of being acquired in the next 12 months. So, we've had good success using these LLMs and state-of-the-art ML and AI in event prediction. When it comes to stock selection models, we still rely on our older approach, which is to try to understand as much as possible what is going in, its reproducibility, and its robustness. Yes, you do get that non-linearity from ML and AI models, but nothing comes for free. There's no free lunch theorem. So, if you're seeing higher returns and you cannot assess where they are coming from, I think that's a little less useful than when you know where they're coming from, because that allows you to first understand the risk and then eventually manage it when you're using it in your portfolio construction.
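For readers who want to see what a generative text classifier of the kind Tarun mentions looks like in code, here is a minimal sketch using scikit-learn's multinomial Naive Bayes. The snippets, labels, and the idea of scoring a "distressed" outcome are made up for illustration; this is not StarMine's model or data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training snippets of research text, labelled by a later credit outcome
# (1 = distressed, 0 = benign). A real model would train on far more documents.
docs = [
    "covenant breach liquidity strained refinancing risk rising",
    "strong cash flow deleveraging ahead of schedule",
    "going concern doubt auditors flag restructuring",
    "stable margins ample liquidity investment grade outlook",
]
labels = [1, 0, 1, 0]

# Bag-of-words counts feeding a generative Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(docs, labels)

# Probability of the distressed class for a new piece of text.
new_text = ["management cites liquidity pressure and covenant waivers"]
print(model.predict_proba(new_text)[:, 1])
```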
Jamie: [00:14:20] Nirav, turning the conversation back to you. I do want to delve more deeply into how you go about building a quant research team, and how you go about hiring the right people to design these models. But first of all, seeing as we're talking about AI, can you talk a little more specifically, because artificial intelligence is kind of a catch-all term? For example, are your employees using ChatGPT? Are they using large language models, and how are they using them?
Nirav: [00:14:48] So, AI models have been around for a long time. What has changed is the sheer variety of models that are available today. They are now being used across various aspects of the investment process: alpha generation, risk management, asset allocation, portfolio construction.
Jamie: [00:15:08] And you can look for particular words and see how many times they are used and pick up on sentiment of the CEO or whatever.
Nirav: [00:15:14] Yeah, so you can give a score on the sentiment. A very unique example is, say, online SaaS companies, which are software companies that sell their software online. There are not many sources of alternative data to predict how they are performing. One unique source is user reviews, because a lot of the information about those companies goes into those user reviews. So, a unique way of using AI on an alternative data set like user reviews is to come up with a sentiment score on each review, look at a time series of that, and see whether there is any information in there, any predictive power on the performance of the companies based on the user reviews. That's an example of how something like NLP or a ChatGPT-style model can get used on alternative data sets.
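As a hedged illustration of turning user reviews into a sentiment time series, the sketch below uses a generic pre-trained model from the Hugging Face transformers library. The reviews, dates, and monthly aggregation are invented; whether such a series has any predictive power is exactly the research question Nirav describes, not something this snippet answers.

```python
import pandas as pd
from transformers import pipeline  # assumes the transformers package is installed

# Hypothetical user reviews for one SaaS company.
reviews = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-03", "2024-02-28"]),
    "text": [
        "Support has gotten slow and billing is confusing.",
        "The new dashboard is great, saves our team hours.",
        "Constant outages this month, considering alternatives.",
        "The pricing change feels fair and the product keeps improving.",
    ],
})

# Generic pre-trained sentiment model; sign the score by the predicted label.
sentiment = pipeline("sentiment-analysis")
results = sentiment(list(reviews["text"]))
reviews["sentiment"] = [
    r["score"] if r["label"] == "POSITIVE" else -r["score"] for r in results
]

# Monthly average sentiment: the kind of time series one would then test
# for predictive power against subsequent company performance.
monthly = reviews.set_index("date")["sentiment"].resample("M").mean()
print(monthly)
```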
Jamie: [00:16:02] And moving on to actually building out a team, Nirav. I've actually been reading more and more articles where people say they're struggling to find talent, and maybe that's always been the case. But when it comes to hiring, are you looking for people with more scientific backgrounds? Do you have engineers who work for you? What sorts of people are you looking for?
Nirav: [00:16:23] So, we very strongly believe that for innovation, diversity is important. And diversity not just in terms of race or gender, but also in terms of academic backgrounds. So, when we look at people, at quants, we look for people across a large number of scientific fields. They could be PhDs in astrophysics, working on large amounts of telescope data. They could be PhDs in biology, working on large amounts of DNA or gene data, applying machine learning and statistical techniques to, say, identify relationships between DNA and diseases, things like that. So, we hire people from various academic backgrounds who typically have experience working on large amounts of data. To add to that, one core skill that we look at is the ability to program and use tools like Python to work on the data; that is critical in this particular role. So, we specifically look for people who are very comfortable with programming. Today, with the advent of AI-based programming tools, it's actually become extremely easy for anybody to code. You can give simple instructions in English to a ChatGPT-based tool, a Codeium-based tool, or an Amazon CodeWhisperer-based tool, and those instructions are converted into code by the tool. So, the ability to code and work with large amounts of data is a very critical skill. In addition to that, quants need data engineers and programmers to build the data lake and the data infrastructure that are needed to apply a lot of these techniques.
Jamie: [00:18:05] Now final question for you, Nirav, on the hiring side of things, because I'm sure there are a lot of people listening to this podcast who'll be interested in getting into this world. So, when it comes to interviewing with someone like yourself for a job, do they come in and it's problems on the board that you try to get them to solve? Or are there actual personality traits that you look for as well? Like what's the sort of character of person you're looking for?
Nirav: [00:18:28] So, just to complete the picture, we do look at the technical skills that I mentioned: the ability to work with data, programming, and statistical skills. As part of our interview process, we actually have a live test case, or case study, which the interviewee solves while somebody is with them on a call, to understand their thought process. It's like a research case study that gets given. In terms of soft skills, what we look for is obviously strong communication skills, to be able to interpret and explain the analysis and the results, and also attention to detail. When you're working with large amounts of data and a lot of these machine learning and AI models, attention to detail is a very important skill that we look for.
Jamie: [00:19:12] Tarun, turning it over to you. Having done quite a few of these podcast episodes now, it seems to me that a lot of hedge funds who may have been more focused on macro, or more focused on equity long/short, are looking for other strategies to move into. Do you feel that more hedge funds are moving into the quant space, and how do you see growth in that area?
Tarun: [00:19:34] They are on a constant search to find uncorrelated assets.
Jamie: [00:19:39] When you say they are on a constant search, do you feel like that's a generic comment about hedge funds in general?
Tarun: [00:19:45] Yes, because your objective is to maximize returns, as Nirav was saying, or to maximize risk-adjusted returns once you take volatility into account. So, you do want to look at other assets, because if you are a value-focused fund and you know value isn't working, the most you can do is manage your downside risk, and if your mandate says you can only take value bets, there are no real opportunities for you to make money and get those returns. So, there is this constant search, and if your mandate allows you to do so, I think you would do it. It could come from using options, or from macro, as you said, currencies, treasuries or commodities. So, we do see that shift, and as somebody who's been on the product side where we sell these products, we do get asked these questions: how is your stock selection model correlated with this other asset class? Because what happens, especially when the market is volatile, is that, like it or not, a lot of these assets start to become correlated. The negative correlation between equities and bonds works when things are sane. But when the market becomes volatile and you need to add something else, you see that they are highly correlated. From whatever little I saw in those six years when I was at a hedge fund, understanding macro, although we were managing an equity portfolio, was a very integral part of risk attribution. Back then it used to be oil prices and the CPI, inflation and related things. But finding these assets which are uncorrelated to your strategy, and then coming up with strategies to incorporate them in your portfolio optimization, is something that I think almost all hedge funds would want to do to navigate these choppy waters.
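A simple way to see the correlation behaviour Tarun describes is a rolling correlation between equity and bond returns. The sketch below uses randomly generated returns purely as a stand-in for real market data; with real data, the point is that this line tends to rise sharply in stressed periods, eroding the diversification benefit.

```python
import numpy as np
import pandas as pd

# Stand-in daily returns for an equity index and a bond index.
# In practice these would come from a market data source, not a random generator.
rng = np.random.default_rng(0)
dates = pd.bdate_range("2020-01-01", periods=500)
equity = pd.Series(rng.normal(0.0004, 0.012, len(dates)), index=dates)
bonds = pd.Series(rng.normal(0.0001, 0.004, len(dates)), index=dates)

# 60-day rolling correlation. The diversification benefit relies on this staying
# low or negative; in volatile markets it often spikes, which is the behaviour
# a risk process would want to monitor.
rolling_corr = equity.rolling(60).corr(bonds)
print(rolling_corr.dropna().tail())
```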
Jamie: [00:21:57] So, Nirav, that brings up an interesting point, actually. I've always wondered, when it comes to quants, to what extent any human intervention is allowed. If something happens, call it a black swan-type event, a war breaks out, something that most models don't really account for. I mean, things can happen which are just beyond the realms of prediction. To what extent do you have a committee in place to step in and say, okay, we're now not going to let the models run the money, we're stepping in and we're pulling back? To what extent do you have those practices in place?
Nirav: [00:22:34] So, we have an investment committee that is constantly monitoring the portfolio and constantly monitoring the models. That is a continuous process. Having said that, any kind of human intervention for us is viewed as bad, so human intervention in our strategies is very minimal. It can happen. For example, as the talk of Russia invading Ukraine was picking up, the investment committee picked up on it and decided to take all positions out of Russia, across FX and equities, and the same with Ukraine. So, for such black swan events, the investment committee will come in and take decisions. But that's rare, and only in cases where we know that the kind of risk being considered is not captured as part of the models.
Jamie: [00:23:28] So other than those very exceptional events, you do need to have nerves of steel. You do need to be able to sit back and trust the models.
Nirav: [00:23:37] Yes. Having done this for ten years now, I would not contest that conclusion. What we have seen is: let the models run. If we've done our bit in very carefully calibrating the models in terms of risk, testing them over time periods and across asset classes, and constantly monitoring them, then any kind of human intervention is bad. Within the models themselves, we do use a lot of AI-based dynamic allocations that are based on the environment. So, as the environment for different types of signals gets better or worse, the allocations to those signals self-adjust. So, human intervention is bad. The investment committee constantly monitors, but very rarely interjects.
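To give a flavour of what environment-driven self-adjustment can mean, here is a deliberately simple sketch that re-weights signals by their recent, exponentially weighted performance. This is a toy scheme written for this transcript, not a description of Versor's allocation models.

```python
import numpy as np

def dynamic_weights(recent_signal_returns: np.ndarray, halflife: int = 60) -> np.ndarray:
    """Toy dynamic allocation across signals.

    recent_signal_returns: array of shape (T, n_signals) of daily signal returns.
    Returns non-negative weights summing to one. Purely illustrative.
    """
    T, n = recent_signal_returns.shape
    # Exponential decay: the most recent day gets weight 1, older days less.
    decay = 0.5 ** (np.arange(T)[::-1] / halflife)
    ew_perf = (decay[:, None] * recent_signal_returns).sum(axis=0) / decay.sum()
    # Signals whose recent environment has been bad get cut to zero.
    scores = np.clip(ew_perf, 0.0, None)
    if scores.sum() == 0:
        return np.full(n, 1.0 / n)  # fall back to equal weight if everything is negative
    return scores / scores.sum()

# Example: three signals, 120 days of made-up daily returns.
rng = np.random.default_rng(1)
history = rng.normal([0.0004, 0.0001, -0.0002], 0.005, size=(120, 3))
print(dynamic_weights(history))
```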
Jamie: [00:24:26] So, Nirav, do you have the same kind of rules and regulations when it comes to your relationship with prime brokers as, for example, a long/short equity hedge fund would have with their prime broker? Or, because of how fast you can trade, or perhaps because you have lower volatility on average, can you get more leverage from prime brokers? How does that relationship work?
Nirav: [00:24:48] The relationship at a very high level is similar and in line with most hedge funds. There are risk parameters that have been agreed with the prime broker. The positions are within those risk parameters. The prime brokers do an extensive due diligence before onboarding a client or a fund. So, a lot of these parameters are agreed in advance and parameterized both within our models and at the prime broker side.
Jamie: [00:25:17] In a previous conversation, we actually had a very interesting discussion around the topic of whether a hedge fund can ever become too big. In fact, Mithra, on that call, said one of the first questions she'll ask a founding partner like yourself who's starting a hedge fund is: how big do you want to be? Because at some point, for whatever reason, liquidity constraints, volatility constraints, you hit a ceiling. So, you've obviously asked yourself those questions, Nirav. But do you think there comes a point where a hedge fund can become too big?
Nirav: [00:25:49] So, I think the important parameter to remember is: at what size does your performance get affected? For example, for one of our products, which works on faster signals and uses a lot of these AI and ML techniques, the capacity is limited, and that's deliberate, because the objective is to maintain a higher Sharpe. To answer your question, capacity comes down to making sure that the alpha, the performance of the product, does not get impacted. And that would depend on the asset classes being traded, and also on the kind of signals that are being used to trade them.
Jamie: [00:26:34] Believe it or not, we are already coming towards the end of the podcast, but not quite yet. So, Tarun, I'd like to turn it back to you and ask a slightly more general question, which is: when you speak to clients, what are the current challenges you feel they face today, ranging from trying to find the right talent to keeping pace with the new technologies in the world of AI? What sort of requests are you getting from clients to try and help them get an edge?
Tarun: [00:27:05] We have two types of clients. One is small to mid-size quant shops.
Jamie: [00:27:11] Can you actually say, what's a small to mid-sized quant shop? Just to give our listeners an idea.
Tarun: [00:27:16] So, anything below about half a billion, $500 million, in AUM. Those are what I call small to mid-size clients. For them, the biggest challenge that we hear, or the reason they come to us, is to evaluate this buy-versus-build question: building a research team in-house and then building a strategy and launching a product, versus buying models from us. For these shops, that's what we get the most. They want to understand how the model works. We build clear box models, we write white papers, and we tell them the anomaly that we are trying to capture, and then we'll estimate the capacity constraint, or whether it's a strategy that only works for small-cap names, or whatever it may be. So, these are the types of things they want us to help them with: explain the model, explain the research we did. And if they like it, they will buy it from us and won't need a big research team in-house doing that work. We become their team of PhDs who've already done the research and explained the results, and then they just buy it from us. On the other end of the spectrum, the big clients, those who manage a billion dollars and above, have a tendency to buy the raw data and build everything in-house. So, they would not be using our models to trade directly. They use our models as derived analytics that they combine with their own strategy to see whether there is any residual value added. So, at a very high level, those are the two ways we cater to this entire spectrum of quant clients.
Jamie: [00:29:14] Nirav, perhaps just a final question to you: is there such a thing as a better environment for quantitative strategies? And perhaps you can also follow up with the things you're most excited about in this world, and what kind of new ideas you're working on to try and maintain that edge.
Nirav: [00:29:34] I think there are environments for certain strategies, absolutely. There could be strategies that are designed to provide downside protection during periods of market stress, so in periods of market stress they do well; that environment is good for them. There could also be environments that are good for certain types of signals within a strategy, and based on the environment, those signals do well. It all depends on how the strategy is constructed to take advantage of that. I think the three big things that we are very excited about are: one, the technological advances and the innovation that is happening, the use of more and more AI and ML-based techniques, and the large variety of tools becoming available in the market today; the pace of innovation is very fast, so we're very excited about that. Two, as humans we are generating more and more data, and that has led to the availability of a lot of alternative data sets and big data, which is again exciting from an opportunity perspective. And third is a move towards faster signals, which is again something that's very exciting for us.
Jamie: [00:30:53] I feel like we've touched on a lot of areas, but we could go into so much more detail. I forgot to ask you at the beginning, but if people did want to get in touch, are they okay to reach out to you on LinkedIn, or however is best?
Nirav: [00:31:06] Absolutely. Yes. LinkedIn would be the best.
Jamie: [00:31:09] LinkedIn would be the best. Okay, great. Well, I think at some point we need to do a part two of this episode because there was so much to unpack. But we've run out of time. I want to say to Tarun, thank you very much indeed, and to Nirav Shah. Thank you so much indeed. I really enjoyed our conversation.
Nirav: [00:31:22] Thank you.
Tarun: [00:31:23] Likewise Jamie.
Jamie: [00:31:25] So thanks everyone for listening. That was another episode of Hedge Fund Huddle. And if you don't already like or follow us, please do so now. It's exactly how you hear about our next episodes. Also, we want to hear from you. So again, feel free to write to us or leave a review. The information contained in this podcast does not constitute a recommendation from any Refinitiv entity to the listener. The views expressed in the podcast are not necessarily those of LSEG, and LSEG is not providing any investment, financial, economic, legal, accounting or tax advice or recommendations in this podcast. Neither LSEG nor any of its affiliates make any representation or warranty as to the accuracy or completeness of the statements or any information contained in this podcast, and any and all liability therefor, whether direct or indirect, is expressly disclaimed. For further information, visit the show notes of this podcast on lseg.com.