Securing Cloud Native Communication, From End User to Service
April 14, 2020 | 38 min read
Everyone building or operating cloud native applications must understand the fundamentals of security issues and modern threat models. Although this topic is vast, in this talk Nic and Daniel will focus on the end-to-end communication and higher-level networking threats, and explore how the combination of an edge proxy and service mesh using TLS and mTLS can be used to mitigate many man-in-the-middle attacks.
Daniel: Welcome everyone, thanks for coming along. Appreciate there is a lot of like good choice at this conference, isn't there? So really appreciate you attending at the end of day. We'll do our best to make it fun.
Daniel: So if you're not here for securing cloud native communication from end user to service, now is the time to escape. Yeah. If you are here-
Nic Jackson: Or the time to stay.
Daniel: Indeed, yes, and learn something different. So we'll introduce ourselves in just a second, but just to set up a bit of like tl;dr, for the millennials in the room, we are seeing an increase, even though we're at a cloud native conference, the reality is that many of us have got, you know, data centers, private kind of cloud. We're not all on the public cloud yet. There's definite drive towards app modernization, hybrid platforms, cloud, internal DC, that kind of thing.
Daniel: But decoupling applications and infrastructure seems to be a winner. Workload portability, take advantage of sort of ML in the cloud, these kinds of things. To decouple, you need to use things like containers, packaging and also things like more advanced networking, which we'll cover today.
Daniel: You need to do it incrementally and the key one today we're going to discuss is you need to make this move securely. All security must have good developer experience, good user experience. How many of you've opened up all ports or all security groups or whatever on your cloud just because it's easier, yeah? If stuff is hard, we basically try to bypass it as developers.
Daniel: The defense in depth is vital and we're only going to be calling, sort of looking at a very small part of security today, but we think it's a very important part and we're going to be saying sort of mind the gaps is key, particularly when you're dealing with end-to-end TLS, when you're like users communicating with a gateway, communicating with services.
Daniel: This is us. My name's Daniel. I work as a product architect at Datawire, and my colleague here on the stage is, Nic Jackson.
Nic Jackson: [inaudible 00:01:55] Nic Jackson, work at HashiCorp.
Daniel: And we don't want to scare you Nic, but...
Nic Jackson: Well, we don't want to scare you, but we almost do, because some of the things that Daniel's mentioned are about security, and security is a real problem with very real threats. So some numbers: 214, that's the number of records of personally identifiable data stolen every second, right? That's fairly astounding.
Nic Jackson: And 2.2%, that's the percentage of stolen records that were actually encrypted. And 65%? That's identity theft. One of the core trends that's been seen in the industry around malicious actors and data theft is that identity theft can actually generate a lot of revenue.
Nic Jackson: So if you're kind of thinking in the mindset that somebody is only after financial transactions, only after credit card numbers, there's actually money to be made just on names, addresses, personally identifiable data, which can be sold on the black market.
Nic Jackson: In terms of what this costs you as an organization, you're looking at, on average, about $3.86 million per data breach, and that gets bigger. If you're looking at a breach of 50 million personally identifiable records, it's $350 million, which is staggering, and it's going on and on and it's increasing.
Nic Jackson: Between last year and this year, there's been an increase of about 72% in the number of attacks and breaches which have been reported and identified. And we didn't just make these numbers up. There are two really good reports which I recommend you read; don't just take our word for this. The Gemalto Breach Level Index, which is a really good report, again, free to read. And also the IBM "Cost of a Data Breach" study. You can pick both of those up off the Internet and we'll put the links in the slide deck so you can download them later, but please check those out. Anybody concerned? Anybody worried?
Nic Jackson: Well, we all should be, right? But the key thing is some of the things and the techniques that we're going to show you today are going to allow you to reduce that attack surface and reduce the opportunity that a malicious actor will have to get access to your information.
Daniel: Thank you Nic.
Daniel: So security is a massive topic, genuinely massive topic. If you haven't really covered it before, or you're looking to learn more, I thoroughly recommend this book by Adam Shostack, Threat Modeling. It dives into threat modeling at quite a deep level, but you can do threat modeling very cost effectively on your projects. It may take a little while to learn to apply it, but like doing good architecture or good code reviews, doing good threat modeling is highly, highly valuable. I thoroughly encourage you to do this.
Daniel: With only 35 minutes, we are going to make some assumptions, to be honest. We're going to assume that you've secured your data at rest and you've hardened your infrastructure, these kinds of things. We are mainly interested in data in motion here, yeah? One small part. Are your communications vulnerable? A whole bunch of breaches these days happen through bad dependencies; people can basically fire up a shell inside your perimeter and move laterally in your data center. That's where a lot of damage can really happen.
Daniel: As you bring in more and more different things to your stack, as you kind of modernize your applications, I generally see in consulting work that the network heterogeneity increases rapidly. Yeah, we know, we all know and love Kubernetes but you get some cloud in there, different flavors of cloud, your internal data center and I'm very much reminded of a bunch of mistakes I've made in my career in relation to the eight fallacies of distributed computing.
Daniel: If you haven't read this paper, it is a must read, yeah? Like honestly, I've made so many... I started my career working with Mongo and making assumptions about my local disk storage versus cloud storage, a bunch of problems like this. But when we do make our networks more heterogeneous, when we move to the cloud, things like that, these underlying properties change a lot. As engineers, we really need mechanical sympathy with the underlying infrastructure. We need to be able to understand its key properties.
Daniel: And you can see number four there. One of the assumptions, one of the fallacies, is "the network is secure." We really need to think about these kinds of things. So as an example of end-to-end comms, imagine we've got a user coming in at the bottom, and we've got, say, two data centers populated with some VMs, and we've got some Kubernetes here. The user comes in at the bottom into a gateway; I'm using Ambassador Edge Stack here. But you also want to secure the communication between services; we're using Consul, again open source, for service discovery here.
Daniel: You can mix and match your tech; you could choose NGINX or Traefik at the front, Istio, other things in the mix. But because of the companies and technology we work with, we've chosen these examples. You also need to secure everything going across data centers, across clusters, and identity between these different services. If your user service is talking to the account service, you want to make sure it's a genuine account service and not a hijacked account service, for example.
Daniel: These days, a bunch of assumptions we used to make are wrong. Things like Zero Trust Networks are really becoming a thing because, you can't guarantee there is not bad actors in your system. You know, we used to kind of look at the perimeter as the kind of the last line of defense and assume everyone inside was good. This is not the case anymore.
Daniel: Now, I'm not saying forget about the edge; the edge is super important. If I've learned one thing from Star Wars, and I've actually learned many things from Star Wars, but if I've learned one thing, it's always be on the lookout for womp rat-sized holes in my edge defenses. This is a really key thing. But you've also got to combine your defenses: it's no good having a really strong front door and then a really weak internal system. What we're trying to pitch here is that you really need to think about this stuff end-to-end.
Daniel: Just a quick refresher, if you're looking at definitions: API gateways, edge proxies, ADCs, application delivery controllers, are often wrapped into a similar kind of area. They're really about exposing internal services to end users. Yeah, this is a key thing. Typically, you know, via multiple domains, multiple hosted sites. You're encapsulating the back ends: your users should not care, or know, what your systems are running on, be that Kubernetes, cloud, whatever. And there's a bunch of cross-cutting concerns. The gateway focuses on TLS termination, end-user auth, maybe using [inaudible 00:09:07], Keycloak, or Sierra, that kind of thing, and rate limiting.
Daniel: You've got bad actors, you know, trying to deny service, these kinds of things; you do want to stop that at the edge. The big challenge with app modernization is that we're moving from a static world, where we knew our IPs and we knew our clusters, to a highly dynamic multi-cloud world where containers are coming up and down, pods are coming up and down, and services are rapidly changing.
Daniel: Then there's the service mesh, or as we also call it, the proxy mesh; the NGINX folks call it the fabric model. Here you're exposing internal services to internal consumers, and you're encapsulating the service infrastructure again. If you're an internal developer or engineer, you don't really care where the other service is hosted, you just want to talk to it, yeah?
Daniel: You also want a bunch of cross-functional things to be done there, such as mTLS. That's identity: mutual identity verification between services. You probably want things like access control lists. You may want to define intentions, which Nic will cover in a moment, as in: I only intend the front end to be able to speak to the web server and not the database. So if the front end gets compromised, it can only do so much damage; you can only move laterally so far.
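To make that concrete, in Consul an intention is a small piece of declarative allow/deny policy. A rough sketch of the front-end example above as Kubernetes resources (the service names are hypothetical, and the ServiceIntentions CRD shape varies across Consul versions, so treat this as illustrative rather than exact):

```yaml
# Illustrative only: allow the front end to reach the web service,
# but deny it any access to the database.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: frontend-to-web
spec:
  destination:
    name: web
  sources:
    - name: frontend
      action: allow
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: frontend-to-db
spec:
  destination:
    name: db
  sources:
    - name: frontend
      action: deny
```

With the deny intention in place, a compromised front end can still reach the web server, but its connections to the database are rejected at the sidecar proxy before they ever reach the service.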
Daniel: We've mentioned the three pillars. Nic and I have chatted about this quite a bit. The three pillars of service meshes, and we've heard a bunch about service meshes at this conference, are observability, reliability, and security. Service meshes are not easy to run, we've had a go with a few clients, but they do offer a lot of value when done right, and we are very much focusing on security. Nic and I actually worked on a project together about three or four years ago, using Consul and Mesos at the time, Nic, wasn't it? And things like getting our service authentication and our internal encryption between services were really hard because it was a multi-language stack. So pulling some of these things out of the language and into a separate process makes a lot of sense, using things like sidecar proxies.
Daniel: So, the animation, [inaudible 00:11:11]. You can imagine the arrows moving, but what I'm trying to say there is basically the kind of... The flow is, a user makes a request, it hits the gateway, it will then often kind of get forked. Maybe you're doing like a scatter gather kind of thing. The gateway talks to further downstream services. You can see the little sidecars there, the little proxies and then that service may in turn call other services to do its work.
Daniel: Now the red arrows are basically points of attack. Clearly the front door is going to be under attack, it's exposed publicly, but if an attacker does get inside, all the other red arrows matter: the gap between, say, terminating TLS at the edge and not enabling TLS further into the cluster. And all the comms going across data centers, you want to make sure they're secured with TLS, because that traffic may go over the internet, for example, and people can be sniffing for that kind of thing.
Daniel: The bottom line is mind the gaps. And Nic's going to sort of point out a few areas now where we've seen people struggle with sort of full end-to-end encryption of traffic.
Nic Jackson: Cheers.
Nic Jackson: So just kind of recapping on the problem. And the problem is that we put a lot of trust into the perimeter and rightly so, firewalls are great. But the key thing is if you could, again, if you look at a lot of vulnerabilities which have been happening over the last couple of years, a lot of those things are happening inside of application frameworks. So very popular open source application frameworks, an attacker finds a vulnerability, they have an ability to do a remote code execution inside of your network, completely bypass the firewall. Now once they're inside your network, this is when they've got the ability to start moving laterally, as Daniel said earlier, and they've got the ability to start manipulating and inspecting your traffic.
Nic Jackson: So what we need to do to solve this is look at the problem not as a perimeter firewall; we almost need each service to have a clearly defined boundary of trust, a clearly defined internal service segment. And the service mesh really enables you to do this. Why do we need it? Why can't we just use the traditional methods? And what are these traditional methods? Some of you may not know it, but network segmentation is the traditional approach of organizing your network applications into areas of trust. You have areas of high trust and low trust, and you strictly control the traffic between those two segments. This is not a new concept.
Nic Jackson: Network segmentation was conceptualized pretty much when NAT came out, when people realized that they could no longer protect their networks using a castle and moat, using four walls. But the problem when you start to look at network segmentation is you start to think, well, we don't run single VMs anymore, we're running multi-tenanted applications in our [inaudible 00:14:21]. We're running many pods on many nodes, so we have to start to think deeper.
Nic Jackson: We have to start to think about in terms of service level segmentation, do I need to start categorizing my pods into areas of risk and areas of trust and controlling the traffic between the two to plug those gaps? Simple. Well, it's not. That's the core problem. The core problem is that we're working in a dynamic environment and the dynamic environment is great because the dynamic environment is what gives us reliability and availability and allows us to do all sorts of amazing deployments and, you know, the stuff that you know and love. Where it causes a problem with network security in the traditional sense is that network security is thinking in terms of fixed point network locations.
Nic Jackson: It's thinking in terms of a service running on a virtual machine at a known location, which is talking to another service at another known location. When you start to have schedulers, well, you don't know what that location is. You've got overlay networks: the source IP address you observe is potentially not going to be the workload's real IP address. So how do you build that firewall rule? How do you configure the routing tables to allow traffic between a pod, which you don't know where it exists, and another machine, which is on a completely different network? And you probably do this: you probably whitelist the entire cluster, because it's just so difficult to do it any other way, but you've just opened up your network.
Nic Jackson: If I can find a remote code execution inside your front end application, if I can get inside your firewall, if I can move laterally, and if you've got 50 million records, I can cause you $350 million worth of financial damage. It's a very big and very real problem. It's also not that difficult to fix.
Daniel: Cheers Nic.
Daniel: So let's move on a little bit to control planes. We've definitely seen that when you're dealing with the edge, say a [inaudible 00:16:34] developer dealing with the edge, it's a different use case than when you're dealing with east-west traffic. I'll break that down a little bit more in just a second. But first I want to put a fantastic article by Matt Klein on stage. Matt's the creator of Envoy and runs the team at Lyft. I hear a lot of talk about control planes and data planes, but people often don't take the time to explain them. Matt's written a fantastic article here; if you want to know more about the difference between control planes and data planes, I thoroughly recommend it. The key thing is that the data plane, in our case, is the proxies: they're doing the heavy lifting.
Daniel: And in both the Ambassador API Gateway and Consul, the proxy happens to be Envoy, which as we know is a CNCF project I'm sure many of you know and love. There are many other proxies out there, plenty in the vendor booths, you can go and have a look. But the key point is that the things doing the heavy lifting are the data plane, the actual proxies themselves. They're being controlled by, and the telemetry is coming back to, the control plane. Now [inaudible 00:17:32]. What I would say is north-south is fundamentally a bit different. In terms of ingress, there are unknown, untrusted people basically coming into your cluster. When you're dealing with internal traffic, you can pretty much guarantee a certain level of trust in the way things are operating, although we're saying be careful, of course. But when you expose something to the world, kind of all bets are off.
Daniel: You've got to be really quite defensive at the edge. You also want limited exposure of mappings: the key thing here is you want to expose only selected endpoints of your services. And you probably have different personas working at the edge: centralized ops doing sensible ingress defaults, rate limiting, fault handling, these kinds of things, and then product teams who are releasing functionality, maybe shadowing traffic, maybe canary launching and drip-feeding traffic into new services. Those are two distinct personas within the overall north-south persona, if you like. East-west is very dynamic, as Nic pointed out; things are coming up and going down all the time, and your east-west solution has to bear that in mind. You really do need identity, things like mTLS, things like access control lists, and you probably see the same two personas here, setting sane defaults.
Daniel: I actually learned this from Matt Klein. The way they configure Envoy at Lyft, by default when you roll it out, it has a bunch of sane defaults so that if your new service goes crazy it will kind of get contained, kind of get quarantined, which I thought was a really nice way to handle the inevitable mistakes we all make. When, you know, a service starts making lots of bad downstream calls or tripping circuit breakers, these are the kinds of things where the service mesh can prevent some of the damage leaking out.
Daniel: But the way you configure these things, the requirements, the personas involved, I think at the moment are quite different. Hence why we are going to do a demo with Ambassador and Consul Connect, though Istio and a few others are trying to marry the two together. A personal opinion: at the moment, I do think the use cases are sufficiently different to warrant different control planes, whether you're controlling north-south ingress or east-west service-to-service. But I'd love to hear your feedback on whether you think that's valid or not; I'm still learning myself on this one. I think that is time for the demo, Nic. [inaudible 00:19:55].
Nic Jackson: Awesome.
Nic Jackson: So, demo, what can go wrong? Only everything. We've talked about the problem, but realistically we don't want you to walk away feeling like everything is doomed; we want you to walk away with a solution. So I'm going to quickly show you how we can use the Consul Connect service mesh with the Ambassador API Gateway to create end-to-end TLS. That's TLS termination at Cloudflare, from Cloudflare through to our edge API gateway with Ambassador, from Ambassador into the Kubernetes cluster, to a destination service. So bear with me.
Nic Jackson: So how are we going to do this? One of the things that we need to do is install Consul, and we need to install Ambassador. I've taken some artistic liberties in that I don't want you to watch me struggle to do that.
Nic Jackson: No seriously, it takes a couple of minutes for that sort of stuff to spin up. So I do have Consul up and running on my Kubernetes cluster, via the Helm chart, helm install consul, and I do have my Ambassador API gateway installed inside of Kubernetes. If I look at what's involved in installing Ambassador, it's a YAML file or a Helm chart; I can literally just download it from the Ambassador website and apply it. Ambassador uses custom CRDs to allow you to configure the various rules and reliability patterns and things like that. It's nothing unusual, it's something that you should be very, very familiar with. It obviously has some RBAC in order for the custom controller, but ultimately it runs the Ambassador control plane, which configures Envoy for you. So that's running on the server.
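For reference, the Consul install Nic describes looks roughly like this; the chart values below are a sketch, and the exact keys depend on which version of the Consul Helm chart you're using:

```yaml
# values.yaml for the Consul Helm chart (illustrative sketch).
# connectInject enables the Connect service mesh and automatic
# sidecar injection for annotated pods.
global:
  name: consul
server:
  replicas: 1          # a single server is fine for a demo
connectInject:
  enabled: true
# Install with something like:
#   helm repo add hashicorp https://helm.releases.hashicorp.com
#   helm install consul hashicorp/consul -f values.yaml
```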
Nic Jackson: So I can see my Ambassador endpoint, and I'm just going to do a kubectl proxy to get access to the admin endpoint, which is not exposed publicly. So this is going to just connect up. Better connect up.
Nic Jackson: And what you will see, this is the curse of the live demo, isn't it? So here's one I prepared earlier. So what you're going to see is the Ambassador, the Ambassador endpoint. And what we don't have in Ambassador right now is we don't have any routes. So we are going to... First thing we want to do once we've got Ambassador applied is we need to make some configuration. So there's a few things that we want to do in order to to configure Ambassador.
Nic Jackson: And one of the things that we want to do is enable the Consul Connect integration. So again, I'm going to apply another set of Kubernetes resources that configures Ambassador to interact with the Consul Connect service mesh. Once you've got that up and running, I need a Kubernetes service, just a load balancer. This is going to expose my public endpoint, pretty standard stuff yet again. We do have some of the Ambassador annotations, and that's just configuration for Ambassador. Where things get interesting is that I need to configure my services, or my service routes, so... My terminal's died. Dear God.
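A sketch of what that load balancer Service might look like; the port numbers and selector here are assumptions, and the annotation carries Ambassador's global Module configuration:

```yaml
# Illustrative sketch: a LoadBalancer Service exposing Ambassador publicly.
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Module
      name: ambassador
      config:
        use_remote_address: true
spec:
  type: LoadBalancer
  selector:
    service: ambassador
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
```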
Nic Jackson: So I know what's happened, right? Bear with me. Oh, thank you. So we can see we've got those routes there. We don't have anything running. What I have is an upstream service: I have the Emojify website, which is our demo application, and that has an API. The website is pure ReactJS, and the API is just a Go-based API. I need to expose both of those to the internet, and I need to do it securely.
Nic Jackson: With Ambassador, the way to expose those to the internet is to configure the routing, and to configure the routing I can use the Ambassador CRDs. So I'm creating a Mapping resource, I'm setting my prefix of slash, that's my route prefix, and I want it to filter on a host header of emojify.today, which allows me to use the same load balancer for subdomains or different domains. And I want to map it to a service inside of Consul. So this is going to use the Consul service catalog, and that allows you to configure that mTLS identification and authorization.
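The Mapping Nic describes would look roughly like this. The resolver and TLS names assume a Consul resolver and a Consul-issued client certificate have already been configured in Ambassador, so treat them as assumptions:

```yaml
# Illustrative sketch: route emojify.today/ to the website service
# registered in Consul, originating mTLS from Ambassador to its sidecar.
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: emojify-website
spec:
  prefix: /
  host: emojify.today
  service: emojify-website-sidecar-proxy  # Connect proxy for the website
  resolver: consul-dc1                    # hypothetical ConsulResolver name
  tls: ambassador-consul                  # context holding the Consul client cert
```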
Nic Jackson: So let me just apply those. All right, thankfully my Internet's now working. We had those two routes; I'm looking at the admin endpoint, and I've now got the additional routes which I've added. The first three routes are just administration stuff, the last three are my user-configured routes. So I have emojify.today, my API route, my Grafana, which I'm just mapping as a subdomain to a different service, and my main website. Now, they're showing up as unhealthy at the moment because I haven't configured the security policy for those services. Security policy in Consul is a concept called intentions, and an intention allows or denies service traffic to flow between two different points.
Nic Jackson: Did anybody go to see the SMI presentation the other night? One of the things that we've done is try to make all of that configuration easier. What I want is to be Kubernetes-centric: I want to be able to write Kubernetes resources to configure my security policy, and I can do that using SMI traffic targets. So let me just apply that. So that's applied my policy now. So I go to my website. Oh Jesus. I don't have access to my Kubernetes cluster yet again, I'm having network problems. Let me just show you.
Nic Jackson: So I'm going to apply those intentions, and those intentions are going to just be that same mapping. I'm going to do that with SMI, and I do that for the three intentions that I need. I need to explicitly state that Ambassador is allowed to talk to Grafana, it's allowed to talk to the website, and it's allowed to talk to the API server. Once those have been applied, the service traffic starts to flow.
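One of those three SMI traffic targets might look something like this sketch, using the early v1alpha1 TrafficTarget shape; the service account names and namespaces are assumptions:

```yaml
# Illustrative sketch: allow Ambassador's identity to call the website.
apiVersion: access.smi-spec.io/v1alpha1
kind: TrafficTarget
metadata:
  name: ambassador-to-website
  namespace: default
destination:
  kind: ServiceAccount
  name: emojify-website
  namespace: default
sources:
  - kind: ServiceAccount
    name: ambassador
    namespace: default
```

Two more targets, for Grafana and the API server, would follow the same pattern; any service pair without a matching target stays denied.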
Nic Jackson: So you go from that denied phase, which you're seeing here as just this bad error, to a situation where the application, or rather the Ambassador API Gateway, is now allowed to talk to the upstream services, and we can see that it works. We're going to do a quick test on that.
Nic Jackson: What this is doing: this is a pure ReactJS website, and the API server is running Go, but there is no direct communication between the API and the website. When it comes to looking at things like... And there we go, that's just configured.
Nic Jackson: So when it comes to things like metrics and observability, some of the problems you have are that network statistics can be pretty tedious to collect inside your application. They can also be pretty difficult to collect. But when you start to use a service mesh, one of the things it gives you is access to those statistics. Before, you could see everything was blue because I didn't have any traffic which was allowed to flow, but now I can start seeing the metrics around my ingress service. Everything is starting to go green. I'm going to see things like my request counts, I'm going to see my API request times. I'm getting all of that measurement and observability, and I'm getting it for absolutely free; I've done nothing other than just enable the service mesh.
Nic Jackson: And again, I know my network was running slow, but I did apply that SMI policy, and you can see that the policy is all there now. That policy gives you that explicit link. At the edge, Ambassador has my Cloudflare origin certificate, right? So I have absolutely no gap inside of any of that request flow where any traffic is not covered by TLS. Termination at Cloudflare: TLS. Cloudflare makes a TLS call to my origin server, again using the specific certificate that I've configured with Ambassador. And Ambassador makes mTLS-authenticated and identified calls to my upstream services.
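The Cloudflare leg of that chain can be sketched as a TLSContext referencing a Kubernetes secret that holds the origin certificate; the names here are assumptions:

```yaml
# Illustrative sketch: terminate TLS at Ambassador with a
# Cloudflare origin certificate stored in a Kubernetes secret
# (tls.crt / tls.key downloaded from the Cloudflare dashboard).
apiVersion: getambassador.io/v1
kind: TLSContext
metadata:
  name: emojify-tls
spec:
  hosts:
    - emojify.today
  secret: cloudflare-origin-cert
```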
Nic Jackson: If anybody breaks into my network, unless they've got that explicit configured permission, they can't manipulate any of the network requests, they can't inspect any of it. And if I had a decent network connection and my Wifi wasn't a little bit dodgy, it was pretty easy to set up.
Daniel: Is the code online as well, Nic? Will you push out the latest version?
Nic Jackson: I'll push all of the sample files to GitHub, yeah.
Daniel: Awesome. So if you do want to play with this, I appreciate some of it is kind of hard to follow along with sometimes, just by its nature, but once you've got the mental model, as you said, Nic, it's quite easy.
Daniel: But I find having a look at the code is definitely the way to go. So pop along to GitHub, you can grab that and the kind of tl;dr is just wrap up briefly. Pretty much what I said at the start. We are seeing this kind of, you know, hybrid sort of journey [inaudible 00:32:05] data center to cloud, you got VMs, you got Kubernetes in the mix.
Daniel: Decoupling infrastructure and apps is a good thing, but you do need to do it incrementally and, critically, you need to do it securely; that is a key thing. Nic has definitely reminded me several times that security really has to have good UX, and with SMI, what Nic was demoing there, we're trying to make these things as easy as possible: applying policies within the actual data center itself, within the east-west traffic, and with your ingress you also have the ability to define various security properties there too. Defense in depth is vital. Don't forget to secure your apps. Do scan your dependencies, do scan your images, your containers, the stuff you're deploying, but don't forget about the network.
Daniel: There's a lot of things around CICD at the moment, continuous delivery in particular for scanning all your components, but don't forget about the runtime.
Daniel: When you deploy things, make sure your platform is secure and make sure the comms between all the components are secure too, and do mind the gaps. It's all too easy in systems to have a gap where TLS is terminated and not spun up again until further upstream, if you like. You need to ensure that everything is TLS, or on the loopback adapter, to basically minimize any gaps where people can pop in and sniff traffic. And at that point, thank you so much. I'm not sure how we're doing for time, but if we've got any questions, we're happy to take them. Thank you.
Speaker 3: Hello. I've got a question. What should be done with your demo to make [inaudible 00:33:51] compatible with Zero Trust [inaudible 00:33:48]?
Daniel: The quest- well, it's a good question. The question is: what should be done with the demo to make it fully compatible with zero trust networking principles? That is probably an offline conversation; as I said, it's quite a deep question. Nic, have you got anything?
Nic Jackson: Well, we're doing our very best. What we're doing is assuming that every inbound communication is malicious; we're assuming that even identifiable inbound communication has malicious intent. So we're trying to cover those bases. What we're trying to do is reduce the opportunity for somebody to introduce malicious traffic, by securing the network but also by protecting the container that the workload is running in.
Nic Jackson: I mean simple container security stuff, Liz Rice has got some wonderful talks on this, but even if I break into an application, the key thing is you're trying to stop people moving laterally. You do that in a number of ways. The first is you secure your perimeters, but the second is you secure your containers: you don't allow somebody to install tools or download binaries that they can then use for further exploits. Zero trust is a very, very broad concept, but the title really does say it all: trust nobody, not even your closest friends or neighbors. In the case of applications and services, that is, not real life, because your friends and neighbors are probably okay.
Daniel: Hopefully that helps. Come have a chat with us afterwards, because it's a very deep topic. If anyone's looking for more to read on this, Zero Trust Networks is a fantastic O'Reilly book about it, and there's also the BeyondCorp project from Google that's well worth looking into.
Daniel: But as Nic said, the title is self-descriptive: trust no one. There is literally zero trust with these things. It's a good question, thank you. Yeah.
Nic Jackson: I think, just to end on this as well: you can't cover every gap, you can't stop everything. What you want to do is make things difficult, as difficult as possible; in some ways, make it more difficult than your nearest neighbor so that the attacker will go and bother them. Security is incremental and covers a number of areas, but simple techniques like network security, when you use the right tooling, shouldn't really be something that you have to think about. It should just be something that's very, very easy and simple to apply. There's no reason not to do it. Thanks so much.
Daniel: Thanks a lot.