Computer Aided Engineering (CAE) software allows engineers to run Computational Fluid Dynamics, Finite Element Analysis and thermal simulations. The software is built and maintained over years, with many contributors, as open source in large C++ codebases, and it was designed to run on a desktop client. SimScale runs these simulations in the cloud as part of a modern microservice architecture. Anatol Dammer, one of five co-founders, takes us behind the scenes and explains how SimScale has taken large, difficult-to-scale legacy codebases and built a microservice architecture using modern programming languages.
As one of five co-founders, Anatol started work on the backend and infrastructure of SimScale while studying Computer Science at TU Munich and Georgia Tech. Today, with SimScale being both on a sound technological footing and a successful business, he and his team work on anything infrastructure – from typical topics like CI/CD, data storage, or provisioning cloud resources cost-efficiently and conveniently as needed by users and developers, to cross-cutting concerns like security and privacy. He also enjoys finding great new talent – if you want to find out more about the unique and fun challenges at SimScale, get in touch on LinkedIn, Twitter, Facebook, YouTube or Instagram.
Rob: Welcome back to Starting Engineering, a podcast that goes behind the scenes at startups. I'm Rob De Feo, Startup Advocate at AWS. Together we'll hear from the engineers, CTOs and founders that build the technology and products of some of the world's leading startups, from launch through to achieving massive scale and everything in between. Experts share their experiences, lessons learned and best practices. In this episode, our guest Anatol, one of five university graduates who co-founded SimScale, takes us behind the scenes of how they built an online simulation platform for CAD models. Anatol and his team work on anything and everything infrastructure, from data storage and provisioning cloud resources through to cost optimization, privacy and security. Anatol, can you start off by telling us about SimScale and what problem it solves for your customers?
Anatol: Yeah, hi Rob. Totally. SimScale is there for anyone who builds physical products, to do design validation digitally, on a physical level, before they actually prototype or build their product. That could mean anything from an electronics enclosure that you want to test for ventilation efficiency, up to race cars where you want to look at aerodynamics, or full buildings and wind load on full buildings. So anything in terms of fluid flow or structural mechanics that you want to test before you actually build something, that's what you can do with SimScale.
Rob: What you're building sounds like it applies to most physical engineering or real-world products. I've seen computer-aided design, or CAD, software before that runs on people's local machines. Are you building, like, a CAD for the cloud?
Anatol: CAD software is actually one step before us in the workflow. So the CAD software is what you would use to actually create a model of the product that you want to build, the same as with simulation software. And all of these softwares have traditionally run on the computer, on the desktop, sometimes with your local cluster for running large computations. But what we've seen in this industry is that CAD softwares have already moved to the browser. We simply expect that simulation is the next logical step, and we also get this feedback from our users and customers, of course.
Rob: What are the reasons that you're seeing for people wanting to make this shift? So instead of running it locally, they're choosing or needing to run in the cloud.
Anatol: There are actually various benefits, along different sorts of dimensions. One is simply the effort to get started. You don't have to buy a large hardware cluster or workstation to run your large simulations. You can basically just run them in the cloud and have it arbitrarily scalable. You can run as many at the same time as you want. That's one huge advantage already. What we also think is a huge benefit of SimScale is simply the commitment that you have to make. If you buy a desktop software, you usually have to make a large commitment, in terms of not just buying hardware, but also training and buying licenses, which are usually annual. You don't pay per use; you have a fixed cost, which is quite large in most cases. Using SimScale is just like any traditional SaaS software. You don't have to install anything. You can pay as you go. We actually measure how much you use on the computational side and only charge you for that. On top of all of this are the typical benefits that cloud applications have: things like collaboration, where you can work together with your colleagues on projects, and easy sharing of results, and so on.
Rob: It's really quite a different SaaS product to many that are already out there. What I mean is that a lot of what I've seen for SaaS is essentially modeling access to a database, with, like, a request-response pattern on top. I'm thinking you must have built something quite different for your customers. Can you talk a little bit about some of the differences between a common SaaS solution and what you've needed to build?
Anatol: Yeah, you're exactly right. This area that we're in, which is computer-aided engineering, is simply quite a complex application domain. There are different kinds of simulations that you can do: structural mechanics, fluid flow and so on. And even within those there's a large array of different kinds of simulation software based on different principles, both in the mathematics and also in how you would actually write the software. Where we started was basically using two open-source softwares, OpenFOAM and Code_Aster. From there we've matured and integrated many more simulation softwares. There's also something to be said for becoming more like a platform, integrating more solvers in the future. The basic challenge is, as you already alluded to, it's quite computationally heavy. You have these large batch jobs which you have to run, which sometimes run for days or even weeks. You have to make sure that they actually produce the right data, but you also want to have some analytics and see if they're actually doing the right thing, so that you don't charge the user for something that they cannot use in the end. And on the other side, before you can actually run those large jobs, we have a complex workflow to get to the point of specifying those jobs, making sure that you have the right setup and the right parameters to get the right results. And this is something which is more interactive. There the challenge is, for example, that we have to interact with the geometry file of your product, so basically the 3D description of what you want to build. We have to interact with that in the browser, for example to define certain constraints on the simulation, and that's something which is also tricky, but we can go into more detail on that.
Rob: I'm curious about how you converted your software into a SaaS solution. To make this work, did you have to do something special? Did you have to adapt or rewrite the libraries? Or was it a matter of creating the Dockerfiles and then deploying them as containers?
Anatol: Oh yeah, this is actually quite an involved topic, as I'm sure any of our engineers would be happy to talk to you about. This is also the reason why, I mean, if you look at the market, SimScale is basically the market leader and there's not too much competition. The main reason is that this is one of the biggest problems to actually get to work. If you look at all of these libraries that are out there, simulation software solvers, almost all of them are legacy software: written over decades, with a huge number of contributors, often very scientifically oriented code, I would say, and not necessarily the best software architecture in all cases. Which brings many, many challenges. Even just using them yourself on your desktop is tricky. Just getting them to compile, to actually run, to produce the correct results even after they're compiled, it's tricky. And then you want to actually make a microservice, or more like a modern cloud application, out of this. And we chose a microservice approach for reasons we can also go into.
Rob: Definitely want to hear more about why you picked microservices and how you implemented that. But considering you were using legacy software, is that how you started off, with microservices?
Anatol: Yeah, of course we did not. Like most startups, we started with a monolith, because it just seemed easy and we wanted to prototype, get something out the door, and have a proof of concept. You know, also to show to investors, because it was clear that we were going to need money and hire more people to actually grow this idea. We started with basically one huge chunk of application, but luckily a much, much smaller and better-behaved one. This worked well for the first few years, I would even say; it took quite a while to get to the point where this solution was ready for the market and ready for the first external users. But of course this is just something which doesn't scale any more at some point. I think there's actually an argument to be made, in some cases, that monoliths can work out and can be the right approach to certain problems. But especially in our application domain, where you have a complex workflow with different parts which are actually implemented by different teams, it just makes a huge amount of sense to decompose the system and have microservices. It is something that we started, luckily, after a few years, after hiring some experienced developers who knew how to actually do this. This is what we've been doing ever since, and we are quite happy about it.
Rob: A common pattern that I see in startups is they'll build a monolith to start off with, deploy and continuously iterate on top of it, and then break it into microservices. Was there a particular event, action or time? I mean, when did you know you should be building it as microservices rather than a monolith?
Anatol: Yes. Yes, definitely. The first two years were basically spent building a prototype that, you know, worked and produced results, and that was good. But then we started actually going to users and customers which had real problems, not just test cases that we were running, and maybe having some beta users, which were quite experienced themselves. Then there were certain challenges that we didn't expect, or at least not at this magnitude. For example, people bring all kinds of different geometry files, describing all kinds of different products, having slight defects which you have to fix before a simulation. Especially in this part of our system, the CAD processing or geometry processing of CAD files, we realized we had to improve our system much, much more. We hit a roadblock with the existing services that we had written, and it was clear we had to use existing solutions which are market proven and used by all of the big simulation software vendors as well. In the end we realized that some of these solutions, well, actually all of them, are written in C++. They are mostly used in legacy codebases, which are huge C++ applications, which have all kinds of challenges in terms of maintainability and efficiency of development. We spent a lot of time looking into this, playing around, integrating C++ codebases. At some point it just took too long, and we had to sit back and ask ourselves, "OK, how can we be faster?" We could not afford to hire 10 developers just to make these basic integrations work in our system.
Rob: I know that working with C++ code, especially these large codebases, can be slow going. In your case, you can't exactly afford to hire your way out of the problem. So what did you do to develop quickly against these hard-to-maintain systems?
Anatol: Exactly, exactly. We had a bright idea, right, from one of our senior developers. And I mean, I really have to say this: we actually acquired a small team, and we have some excellent people on staff who often have very creative ideas, which is just something we need a lot in this company, and which has proven to work out great. He had the idea that there was this sort of new language at the time called Go, or GoLang, which has some nice properties. First of all, it's a modern language: it has garbage collection, it's quite safe to use, and it's actually quite easy to learn, too. Even someone who doesn't have much experience in the language at all, you can usually get them onboarded in a matter of days. That doesn't mean they write the perfect architecture, but they can actually get working. That's one great property: it's a safe, modern language, and especially for web applications it works quite nicely. The huge advantage, or the huge leverage, we saw for us is that it has great integration with C and C++ APIs. If you look at integrating C++ libraries from Java, for example, it's terrible, right? There are two different frameworks for that, and they are both really, really inefficient, and it's just not fun at all. This developer had the idea: why don't we just take the C++ libraries and components that we want to integrate into our system and wrap them in a layer of Go? There we can easily use the C++ API, but we're not slowed down by integrating into a microservices system: creating API calls, implementing monitoring, handling error codes and so on. This is much, much easier and faster to do in Go. We tried this in a sort of prototype, and it just worked extremely well, and we've used it ever since for all external C++ or C codebases we want to integrate. It's been a game changer.
Rob: That's awesome. So you can use a modern language like Go, so you have fast development and all the features that come with that, but are still able to use the power that's been built into these C++ libraries over a long period of time. Is that the approach that you've broadly taken when it came to breaking down the monolith as well, taking these components and putting a wrapper in front of them?
Anatol: When we started using it, it was more like new development, where we basically had an old service, written internally, which we had to replace by integrating external libraries. This was where we started; this was the first project we used it for. But now we also use it basically anywhere we have to write a microservice or a web service which integrates with C++.
Rob: Now I imagine what appears to be a set of GoLang wrappers that present these libraries as endpoints. Given the compute requirements, how does your software manage these heavy or long-running jobs?
Anatol: We have a complex workflow to get from a CAD file for a product to a simulation and a simulation result; there are actually multiple steps in between. There is a certain part that's interactive. So basically the user uploads a geometry file, a CAD file describing their product. Then you actually have to analyze this, generate metadata, generate visualization files that you can actually display in a browser for the user, and make this work well. There's also something which we're working on right now where users can actually make small modifications to that file in the browser, which is something like a small CAD system in the browser. That is, I think, also going to be another game changer for us and our users. Then there are many, many things you have to do that are interactive. If you have a valve, for example, and you want to simulate the flow through the valve, you basically have to click on the valve and say, "okay, here water comes in, or a different material, at this speed", and so on. There's a large amount of work that's interactive, and this is the part that is running as microservices. For example, CAD handling, geometry handling and so on: these are microservices and they're running on ECS. A few years ago, we looked at how we should run this. When we actually started SimScale, ECS didn't exist yet, so there was basically EC2, there was Beanstalk, but there was no ECS, and there was no Kubernetes really, or at least not in a usable state. As soon as ECS became available, we quickly migrated to it, because before that we were basically still running this monolith. So we started breaking things apart and moving to ECS and microservices.
Rob: You sort of hinted there that the release of ECS was one of the pieces of technology that enabled you to break down your monolith. Can you talk us through a little of why that was?
Anatol: Before that, we were still running this monolith. I guess actually multiple things converged. First of all, we realized, both from the maintainability perspective of this monolith of legacy code, and from having many more developers that we wanted to be able to work efficiently on our codebase, that we had to go in this microservice direction. But then we actually spent a bit of time hacking this together ourselves, running microservices on instances ourselves and making this work with our own tooling, actually even still outside AWS in parts, because back then we had a part of our services outside AWS. That was also a big mistake and cost us a lot of time to maintain all of this infrastructure. The large batch simulations were actually already running in AWS on EC2, because there was no other way of doing this from the very beginning of the company. We said, OK, multiple things: we have to break apart this monolith a bit more and have more microservices. What we don't want to maintain is infrastructure and logic for releasing all microservices anymore. Also, we want to run all of our services inside AWS, so not just the large batch jobs but also the interactive services, and run all of this together. Then you basically look around, and before that there was only, as I said, EC2 and I think Elastic Beanstalk, and no ECS. And ECS simply offered exactly the kind of platform that we needed. It's something which takes Docker images, or Docker services; you create clusters and run them there, and all of this heavy lifting of managing releases, draining connections to old versions of your services, all of this is basically handled by ECS. So we just started migrating everything there.
Rob: You mentioned some batch jobs running, but you didn't exactly say where they were. Are they running inside ECS along with everything else? And did you have to do anything in particular to make that work?
Anatol: Oh, yeah. Good point, good point. I didn't get to that yet. The interactive part of the workflow, where people set up simulations and specify all the parameters, is what is running in ECS. What we were not running in ECS, but on EC2 from the very beginning of SimScale, were the actual simulation jobs themselves. First, to be able to control the different resource requirements that jobs have. The second reason is that it's quite a spiky workload, and we felt, even though you can of course also scale ECS clusters up and down, that it makes sense to separate these workloads. They have quite different resource requirements, so it's not easy to pack these kinds of jobs together on one cluster and have an efficient packing of these jobs. You also still have cases, and this actually goes back to the very beginning of our talk, where these legacy softwares behave somewhat pathologically, right? At least in the beginning; now it's much, much better. But we still sometimes have cases where some of these softwares go into states which heavily affect the instance overall, where they take up all the memory, for example, and things start getting killed. This is also one reason why we want to separate jobs from different users as much as possible. So yeah, this is all running on EC2 right now.
Rob: I see that layer of separation being very helpful, especially if something goes wrong. A lot of what you've built sounds like it could work with queues. Is that the way you've architected it? And does this mean it changes the way you can scale the EC2 instances up and down?
Anatol: This is actually also quite interesting, especially because this is something which we implemented in literally the first one or two weeks of SimScale. We ran it for a long time, and it worked well and served us well, but at some point we had to professionalize it a bit more and make it easier and better for users. We did many, many things; maybe we can highlight one or two of them. One thing that proved to be a problem at some point is that in the very first two years of SimScale, the main point was to get things to run at all, to make it really work, to get correct results, good results and so on. Once we were past that point, a long time ago already, the bigger goal became to really optimize the user experience: to make sure that our users have a snappy experience and don't have to wait a long time to actually get results. One problem that we had there is that these are huge software packages. Even if you dockerize them, these are sometimes multi-gigabyte packages of software that you have to load onto an instance and run. We had complaints from our users, and our internal analytics also showed, that sometimes a user clicks "run this job for me" and it then takes minutes to actually start getting results. At some point that became unacceptable, especially for quite small jobs that users wanted to run maybe just to test something. We started using analytics, looking at where this time actually goes. What happens? There are certain things which you cannot improve a lot. For example, if you start an EC2 instance, it takes a minute or two to start up. This is inherent; I think Amazon does a lot of things to optimize this and make it as fast as possible, but there's just a certain amount of time that you are always going to have. We started optimizing around that a bit by having a pool of instances available at all times.

But then you have to be smart about which kind of instances you put into this pool. You always have inefficiency, because different jobs have different requirements. This worked out a bit, but not too great. Then we dug a bit more; I mean, there were many iterations in this process. But the optimization we ended up with, which had the most impact I would say, came from seeing that it sometimes takes a long time to actually load all the software onto instances. What we had was a precooked AMI, basically an image which contains all of the software we have to run, and this image was about 80 gigabytes, which is a lot. People used to modern applications will know certain stacks which produce larger build artifacts and so on, but 80 gigabytes to put onto an instance to run jobs is a lot. Of course we optimized this, we cut it down, we made it smaller. What we realized in the end is that if you start an EC2 instance, the image that's going to be running on that instance has to be fetched from S3. Which is nice and good, but it turned out this process is sometimes a bit slow, and this is something which we found to be a huge factor. To address this, since we had tried to optimize it and just didn't manage to improve it much, we created a separate volume which contains all of our software. We keep a pool of these volumes, which is much cheaper than running a pool of instances; instances are much more expensive than just keeping EBS volumes available. We maintain this pool, and we make sure that it is always up to date, that our software is updated on those volumes. And once we create a job instance, we attach one of those volumes from the pool, and we have all of our software available, pre-warmed and ready to run.
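A rough sketch of that volume-pool idea, with the actual EC2/EBS API calls left out. The type and method names here are illustrative, not SimScale's code:

```go
package main

import (
	"errors"
	"fmt"
)

// volumePool holds IDs of pre-provisioned EBS volumes that already
// contain the (large) solver software, so a new job instance only has
// to attach one instead of fetching a huge image from S3.
type volumePool struct {
	ready []string
}

// Acquire hands out a warm volume; an empty pool would mean falling
// back to a slow cold start while the pool is replenished.
func (p *volumePool) Acquire() (string, error) {
	if len(p.ready) == 0 {
		return "", errors.New("pool empty: cold start required")
	}
	id := p.ready[0]
	p.ready = p.ready[1:]
	return id, nil
}

// Release returns a refreshed, up-to-date volume to the pool.
func (p *volumePool) Release(id string) {
	p.ready = append(p.ready, id)
}

func main() {
	pool := &volumePool{ready: []string{"vol-aaa", "vol-bbb"}}
	id, err := pool.Acquire()
	if err == nil {
		// In a real system this is where the EC2 AttachVolume call
		// would be made for the freshly started job instance.
		fmt.Println("attach", id, "to the new job instance")
	}
}
```

Keeping idle EBS volumes is far cheaper than keeping idle instances, which is the tradeoff Anatol describes: the pool pays only for storage until a job actually needs to run.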
Rob: Well, that's good. So you are reducing the amount of time it takes to start an instance, even if it has a large amount of software, because your AMI has a small install footprint and the EBS volumes have the software installed on them, and you just attach them as you go. And because you reduced the time it takes to spin up an instance, you can have a smaller pool of compute available as well. Are you taking it even further? Do you have a mixture of on-demand as well as other pricing models, like spot?
Anatol: Totally, totally. In the beginning we only used on-demand instances because we were worried about terminations. But especially because we also have a freemium plan, which we want to optimize for cost at least, even though we also get value out of it by having great simulation projects which are public, we wanted to optimize costs here a bit as well. We experimented with spot instances, and there's something quite nice that Amazon started offering recently, I mean, it's been about a year now: basically the capacity-optimized allocation strategy for spot. You tell Amazon which kind of resources you want to have in your cluster, and Amazon picks the instance types for you. Using that, we were able to bring down spot terminations, so we are saving money and have a more stable system.
Rob: Is it just CPU instances you use, or do some of your simulations require GPUs? And if so, are there any different considerations you need to take into account when using the spot market?
Anatol: A while ago we started using a GPU-based simulation solver, which we use because it's extremely scalable. We can use it to simulate entire buildings, or blocks of buildings in cities, which is great. And Amazon luckily has GPU instances available, which is perfect, because they're incredibly expensive to purchase yourself, so that would be completely unfeasible. One challenge that we've seen, though, is that Amazon of course has multiple regions, which have different numbers of instances available, in different availability zones and different instance types. The problem we saw is that, at least in our region, there are not always enough GPU instances of all types available. That's something we're struggling with a bit right now. Of course, one solution would be to say, okay, these are batch jobs, we can run them any time; you just wait until demand for GPU instances goes down, and then you run them. The thing is, of course, that even though these are large and long-running jobs, our users would like to have results fast. In some cases, if a job takes days, maybe this is not so critical, but in the rest of the cases, especially with this extremely scalable solver where you can actually get complex results in a short amount of time, people would like to have them quickly. What we're looking at here is that it seems like the best solution is to go multi-region. We see that in other regions there are more of these instance types available, and then it becomes more about tricky tradeoffs. For example, if we run these jobs in a different region where there are more GPU instances available, we either have to pay for data transfer, and then we have to see if this is worth the cost, or we have to replicate some of our data processing services into other regions, where we have to see if the fixed cost of running the services, or of scaling them up and down, offsets the cost of data transfer.
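That transfer-versus-replication tradeoff boils down to simple arithmetic. A hedged sketch, with entirely made-up prices (real AWS rates vary by region and service):

```go
package main

import "fmt"

// transferCostUSD is the variable cost of shipping job data to a
// GPU-rich region; the rate is illustrative, not real AWS pricing.
func transferCostUSD(gb, usdPerGB float64) float64 {
	return gb * usdPerGB
}

// replicationWins reports whether paying a fixed monthly cost to
// replicate processing services beats per-GB transfer at this volume.
func replicationWins(monthlyGB, usdPerGB, fixedMonthlyUSD float64) bool {
	return transferCostUSD(monthlyGB, usdPerGB) > fixedMonthlyUSD
}

func main() {
	perGB := 0.02  // assumed cross-region transfer rate, $/GB
	fixed := 200.0 // assumed monthly cost of replicated services, $
	fmt.Println(replicationWins(5000, perGB, fixed))  // 5 TB/month: transfer is cheaper
	fmt.Println(replicationWins(20000, perGB, fixed)) // 20 TB/month: replication wins
}
```

The break-even point is simply the fixed cost divided by the per-gigabyte rate; below that monthly volume, paying for transfer is the cheaper option.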
Rob: You mentioned earlier having a browser-based editing tool. That sounds like it would have different compute needs to your simulation jobs. Can you talk about some of the differences between the two architectures?
Anatol: This is actually a feature which we're extremely proud of, because none of our competitors have it yet. Our users sometimes want to make small modifications to their CAD files after they upload them. So we built a small in-browser CAD editor, not a full-fledged one, because that's quite complex, but a small one to make small modifications. We actually leveraged some nice approaches that we've learned over the years to make this work well. One of them is using Go to package C++ applications, to wrap them, kind of, and also using these interactive services that we have built so far at SimScale. What we did is we purchased and licensed the world's best CAD library, or CAD kernel, which is the term used in the industry, and we wrapped it with our own services. In the end we did not want to use ECS. ECS works well for many use cases, but in this case we have a situation where we have to create sessions for this service on demand. So if a user comes to SimScale and uploads a CAD file, we have to run a session of this external library that we licensed, this is simply a limitation of the library, and we have to create this session somewhere. We realized this doesn't happen fast enough: if we scale on ECS, actually starting that task takes too long. What we then experimented with was Fargate, which is kind of built for exactly this. Fargate is more built for these one-off, short tasks, and they often also start faster. It actually worked okay; the main reason we ended up not pursuing it back then, and rolled back to using ECS with optimizations to start these sessions faster, was simply cost. Back then the Fargate pricing model was not a good fit for this. Now this might actually have changed in our favor, because Amazon has changed the Fargate pricing model a bit, so this is something which we are actually considering going back to.
Rob: The pattern I see the most in startups is a variation of the N-tier architecture, which has multiple layers where the presentation, application processing and data management functions are separated. With this architecture, startups can iterate quickly and then scale out their application when needed. Usually you can implement application processing, or business logic, by importing libraries and then building on them in your language of preference. The simulation libraries SimScale uses are large legacy codebases written in C++, a language that Anatol knew they could not iterate quickly with. Unable to hire themselves out of the problem, they had to find another solution. So they built what looks to me like a facade layer in GoLang: a thin layer that interacts with the C++ library, exposing the functionality as an API that they control. Hiding much of the complexity of the underlying codebase makes it easier and faster to build on top of. GoLang is a compiled language that's easy to containerize, with short and clear Dockerfiles, and using containerization SimScale could then start to create services and break apart their monolith. I find that breaking long-running jobs out of a monolith as single services is usually the best place to start. Services do one thing, and this makes them smaller and simpler to understand, as they use a fraction of the codebase. They can then be scaled in increments based on the queue length. And since they run in a separate process, even a long-running job will not affect the rest of your application. Simulation jobs can run for hours or even days; building these services and using queues creates a fault-tolerant architecture, making spot instances a great way to save money. Spot instances are unused EC2 capacity offered at up to a 90 percent discount on on-demand prices. SimScale used these to reduce their costs and even offer a free tier.
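The queue-length-based scaling described above might be sketched like this. It is illustrative only; a real autoscaler would read, say, an SQS queue depth and adjust an EC2 fleet accordingly:

```go
package main

import "fmt"

// desiredWorkers scales workers in increments from the queue length:
// one worker per jobsPerWorker queued jobs, capped at maxWorkers.
func desiredWorkers(queueLen, jobsPerWorker, maxWorkers int) int {
	n := (queueLen + jobsPerWorker - 1) / jobsPerWorker // ceiling division
	if n > maxWorkers {
		n = maxWorkers
	}
	return n
}

func main() {
	for _, q := range []int{0, 9, 100} {
		fmt.Printf("queue=%d -> workers=%d\n", q, desiredWorkers(q, 4, 10))
	}
}
```

Because the queue holds the jobs durably, a worker lost to a spot termination simply leaves its job to be picked up again, which is what makes this architecture a good fit for discounted capacity.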
By being flexible with the instance type, size, availability zone, and even region, they can get the biggest discounts and best availability on offer. If for whatever reason they can't get enough of the compute they need, then they can fall back to using on-demand. One pattern that was really interesting to me was how they use EBS volumes to speed up making an instance available. They needed to quickly complete small simulation jobs for customers, but they have software packages that run into the tens of gigabytes. From their testing, they weren't happy with the speed of getting an instance started from an AMI of that size. They did something, though, that I've not seen before, which was to keep a pool of EBS volumes ready with the software installed on them. Then, when an instance starts, they can attach a prepared EBS volume to the EC2 instance dynamically. Based on their success, this appears to have balanced cost and performance for their needs. Simulation jobs can need large CPU or even GPU instances, and having a warm pool of these available would be expensive. Instead, having ready EBS volumes has improved their startup time and reduced their overall costs. Let's get back to Anatol to hear about what he's learned while building SimScale, the best way to onboard developers, and advice on how to start a startup. You've learned a lot over the last eight years building SimScale. If you could go back and do one thing differently, what would that be?
Anatol: Yeah, I mean, this would of course be a total game changer, because we've learned so many things, especially through hard work, which you cannot replace with anything else: just experimenting, building things, failing, and doing better next time. Almost all startups learn this lesson: start from the beginning with a good continuous integration approach, so that you don't build things and deploy them by hand. It's extremely messy to clean up at some point and is just a huge drag on efficiency. Maybe not starting with a monolith, even though that's a tough choice to make, because microservice approaches have some overheads, which may be too much in the beginning. At least making the switch at a saner point, before we had, I think it was 25 or 30 developers, working on our code already and being extremely impacted by the inefficiency of our system. And then of course picking the right services from the beginning, for example being smart about picking ECS from a very early point and not running our Docker services ourselves or with home-built scripts.
Rob: You mentioned that you actually acquired a group of developers. So when you have developers join your team, what are the best practices that you've found for onboarding them? Are there any resources or particular places that you point them to in order to get them started?
Anatol: There's actually something that is also close to our heart, which was not great in the past but is much, much better now. I would not even start talking about resources, to be honest. One of the main things you can do to make a developer's life easy is just having a good architecture in place: something people understand intuitively, rather than something arcane, written in a way that you have to read a lot of documentation to understand it. It's much better if you have a system that is so intuitive and makes so much sense that it just kind of clicks when you're looking at it. That, I would say, is really a huge win. Also, for example, having good tests in place, which both describe how the system works and make it safe to modify and play around with. We also have documentation which we point people to, introducing them to all the different tools that we use and introducing them to Amazon: which services in Amazon we have, what we use them for, and how you interact with them. We actually have a bit of our own tooling built around ECS for managing deployments, managing service versions and so on, and of course how to use that. The tools themselves are also written in a way that is sort of self-explanatory, but at least some documentation of how to use them is quite important too.
Rob: Now, if we put you in the shoes of a mentor for a developer who wants to go and start their first startup, what piece of advice would you give them to set them off on the right path?
Anatol: Interact with excellent people. I simply believe that SimScale couldn't exist if we hadn't, quite early on, found just a great team of people, maintained that standard, and maybe even leveled it up more. Make sure that every addition to the team is someone who brings in knowledge, who brings in creativity, and is also just a pleasure to work with. I still believe the biggest piece of advice, critical to any business: make sure that you interact with the right people. Smart people are there to help you have creative ideas and find solutions.
Rob: Thank you for sharing how you've built, scaled and architected using difficult-to-maintain legacy codebases, but also your lessons learned and best practices. If you're excited about building the next big thing, or you want to learn from the experts that have been there and done that, subscribe to Startup Engineering wherever you get your podcasts. Remember to check out the show notes for useful resources related to this episode, including blog posts by SimScale and Anatol's team and how to get in touch with them. Until the next time, keep on building.