Crystal Tenn

• Virtual Network: a logically isolated network in Azure that enables communication between resources (subnets, VMs), with other VNets, or with on-premises networks, depending on how you set it up. It is the foundation of Azure networking and is a Layer-3 overlay. Virtual networks are segmented into one or more subnets. Limitation: a VNet cannot span regions or subscriptions; however, VNet Peering, ExpressRoute, or VNet-to-VNet connections can link VNets across regions or subscriptions.
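
As a concrete illustration, here is a minimal sketch of creating a VNet with two subnets using the track-2 Python SDK (azure-mgmt-network). The resource group, names, and address ranges are placeholders, not anything from these notes.

```python
# Hypothetical example: create a VNet with two subnets via azure-mgmt-network.
# All names, address ranges, and the subscription ID are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = network_client.virtual_networks.begin_create_or_update(
    "demo-rg",
    "demo-vnet",
    {
        "location": "eastus",
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        "subnets": [
            {"name": "frontend", "address_prefix": "10.0.1.0/24"},
            {"name": "backend", "address_prefix": "10.0.2.0/24"},
        ],
    },
)
vnet = poller.result()  # completes once the VNet and both subnets exist
```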

• Network Security Group: allows or denies ingress and egress traffic to and from your Azure resources; it is applied to a NIC or a subnet.

Think of a network security group as a cloud-level firewall for your network. It is a prioritized set of rules, each based on a 5-tuple: source and destination IP, source and destination port, and protocol. You can expose only certain ports of a subnet or NIC to the Internet, and you can control the flow of traffic between subnets or between NICs on the same subnet. The rules are stateful, so return traffic for an allowed request is tracked automatically.
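
To make the 5-tuple concrete, here is a sketch of one inbound rule added to an existing NSG with the same SDK (the NSG name, priority, and address prefixes are made up for illustration).

```python
# Hypothetical example: allow HTTPS from the Internet to the frontend subnet.
# Assumes an NSG named "demo-nsg" already exists in resource group "demo-rg".
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

network_client.security_rules.begin_create_or_update(
    "demo-rg",
    "demo-nsg",
    "allow-https-inbound",
    {
        "priority": 100,                                # lower number = evaluated first
        "direction": "Inbound",
        "access": "Allow",
        "protocol": "Tcp",
        "source_address_prefix": "Internet",            # source IP
        "source_port_range": "*",                       # source port
        "destination_address_prefix": "10.0.1.0/24",    # destination IP
        "destination_port_range": "443",                # destination port
    },
).result()
```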

• Route Tables: A route table contains a set of rules, called routes, that specify how packets should be routed in a virtual network. Route tables are associated with subnets, and each packet leaving a subnet is handled based on the associated route table. Each route table can be associated with multiple subnets, but a subnet can only be associated with a single route table.
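
For example, a user-defined route that sends all traffic leaving a subnet through a network virtual appliance could look roughly like this; it is a sketch with placeholder names and addresses.

```python
# Hypothetical example: a route table with one user-defined route that forwards
# all outbound traffic to a network virtual appliance at 10.0.3.4.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

network_client.route_tables.begin_create_or_update(
    "demo-rg",
    "demo-routes",
    {
        "location": "eastus",
        "routes": [
            {
                "name": "all-traffic-to-appliance",
                "address_prefix": "0.0.0.0/0",          # matches every packet leaving the subnet
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.0.3.4",
            }
        ],
    },
).result()
# The table only takes effect once it is associated with a subnet (by setting the
# subnet's route_table property); remember, one route table per subnet.
```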

Azure Container Instances (ACI) offers the fastest and simplest way to run a container in Azure, without having to provision virtual machines or learn new tools; it's just your application, in a container, running in the cloud. With Azure Container Instances, you can easily run containers with a single command. It covers a wide spectrum of scenarios, including batch processing, continuous integration, and event-driven computing. Customers consistently report that ACI is uniquely suited to their burst workloads: it supports quick, cleanly packaged burst compute and removes the overhead of managing cluster machines. Some of the largest customers also use ACI for data processing, where source data is ingested, processed, and placed in a durable store such as Azure Blob Storage. By processing the data with ACI rather than statically provisioned virtual machines, you can achieve significant cost savings thanks to ACI's granular per-second billing.
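
To show how small the footprint is, here is a hedged sketch of a single-container group created with the Python SDK (azure-mgmt-containerinstance); the image, sizes, and names are placeholders.

```python
# Hypothetical example: run a one-off batch worker as an ACI container group.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, ResourceRequests, ResourceRequirements,
)

aci_client = ContainerInstanceManagementClient(DefaultAzureCredential(), "<subscription-id>")

worker = Container(
    name="batch-worker",
    image="myregistry.azurecr.io/batch-worker:latest",  # or an image from Docker Hub
    resources=ResourceRequirements(
        requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)
    ),
)

aci_client.container_groups.begin_create_or_update(
    "demo-rg",
    "burst-job",
    ContainerGroup(
        location="eastus",
        os_type="Linux",
        containers=[worker],
        restart_policy="Never",   # run once and stop; you pay per second while it runs
    ),
).result()
```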

Just pull container images from Docker Hub or a private Azure Container Registry, and Web App for Containers will deploy the containerized app with your preferred dependencies to production in seconds. The platform automatically takes care of OS patching, capacity provisioning, and load balancing. It's a good fit for simple web apps that need scaling but not orchestration, and it can deliver significant cost savings.
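
A rough sketch of that flow with the Python SDK (azure-mgmt-web) might look like the following, assuming a Linux App Service plan named demo-plan already exists; everything else is a placeholder.

```python
# Hypothetical example: point a Web App for Containers at an image from Docker Hub.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import Site, SiteConfig

web_client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

web_client.web_apps.begin_create_or_update(
    "demo-rg",
    "demo-container-app",
    Site(
        location="eastus",
        server_farm_id=(
            "/subscriptions/<subscription-id>/resourceGroups/demo-rg"
            "/providers/Microsoft.Web/serverfarms/demo-plan"
        ),
        site_config=SiteConfig(
            linux_fx_version="DOCKER|nginx:latest",  # image from Docker Hub or a private ACR
        ),
    ),
).result()
```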

The cluster autoscaler (CA) (PREVIEW 01/2019) can scale your agent nodes based on pending pods. It scans the cluster periodically for pending pods or empty nodes and adjusts the node count where possible. By default, the CA scans for pending pods every 10 seconds and removes a node if it's unneeded for more than 10 minutes. When used with the horizontal pod autoscaler (HPA), the HPA updates pod replicas and resources as demand changes. If, after this pod scaling, there aren't enough nodes (or some nodes are no longer needed), the CA responds by adding or removing nodes so the pods can be scheduled.
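
The HPA half of that pairing can be defined through the Kubernetes API; below is a sketch using the official kubernetes Python client (deployment name and thresholds are made up). The cluster autoscaler itself is enabled on the AKS node pool, not through this API.

```python
# Hypothetical example: scale the "web" Deployment between 2 and 20 replicas
# based on CPU usage. When new replicas cannot be scheduled, they sit Pending
# and the cluster autoscaler adds nodes for them.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=20,
        target_cpu_utilization_percentage=60,  # add pods above ~60% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```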

Best suited for: large enterprise microservice architectures that need to scale on demand quickly, need as close to 100% uptime as possible, and want rolling updates with no downtime; usually customer-facing applications. At the moment, it is best for .NET Core (cross-platform) on Linux containers or other general Linux container workloads.

Part 1 (on your local laptop): When you request Google from a browser, the browser creates a GET request. The Presentation layer encodes it to HTTP/S. The Session layer opens a connection between you and the server. The GET request gets put into a TCP packet. Then the Network layer figures out the next IP it needs to send it to, in this case your router. The Data Link layer (your computer's Network Interface Card, or NIC) frames the data, and the Physical layer transmits it over the air from your computer to the router via WiFi (or over a physical cable if you're wired in).

Part 2 (on your router): The router receives the TCP packet over the WiFi Physical layer (most routers today have WiFi built in, though it used to be a separate box). The router's NIC translates the electrical signals from the Physical layer back into bits at the Data Link layer. Then the Network layer on the router figures out, based on the destination IP address, where to send it next. It converts the data back into electrical signals and sends it out the physical cable to your cable company and onwards.
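
A small Python sketch of the laptop side of this walkthrough: the Application-layer GET request is just bytes handed to a TCP socket, and the OS plus the layers below handle IP routing, framing, and physical transmission (example.com stands in for Google here).

```python
# Illustrative only: send an HTTP GET "by hand" over a TCP socket.
import socket

host = "example.com"  # stand-in destination for the walkthrough

# Application-layer payload: a plain-text HTTP GET request.
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode()

# Transport layer: open a TCP connection. Underneath, the Network layer picks
# the next hop (your router), and the Data Link/Physical layers move the frames.
with socket.create_connection((host, 80)) as sock:
    sock.sendall(request)                 # the GET goes out inside TCP segments
    reply = sock.recv(4096)               # first chunk of the HTTP response
    print(reply.split(b"\r\n", 1)[0])     # e.g. b'HTTP/1.1 200 OK'
```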

There are apps an end user can use that work at the Application layer. As an end user you only see the interface, such as Edge. The app itself can use, for example, the .NET Framework's HttpClient to work with the networking Application layer (other browsers and tools have their own development frameworks that work with the Application layer).
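
For comparison, here is the Python standard-library analogue of that idea (urllib.request playing the role HttpClient plays in .NET): the library exposes the Application layer and hides everything beneath it.

```python
# Illustrative only: an Application-layer HTTP client hides all the lower layers.
from urllib.request import urlopen

with urlopen("https://www.example.com") as response:
    print(response.status)                       # e.g. 200
    print(response.headers.get("Content-Type"))  # e.g. text/html
```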

The Transport layer is a connection between two endpoints, in this case your computer and Google's servers. Transport layer protocols include the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). TCP gives apps a way to deliver (and receive) an ordered and error-checked stream of packets over the network. TCP is slower because of that error checking, and it's what you want when things (like files) must be delivered without issues or errors. UDP lets apps deliver a faster stream of information by doing away with error checking. It is used more often for video streaming or gaming, where you need speed and can tolerate some errors.
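
A compact sketch of the difference (the hostname and the UDP destination are made up for illustration):

```python
import socket

# TCP (SOCK_STREAM): connection-oriented, ordered, error-checked byte stream.
with socket.create_connection(("example.com", 80)) as tcp:
    tcp.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(tcp.recv(1024).split(b"\r\n", 1)[0])   # status line arrives intact and in order

# UDP (SOCK_DGRAM): connectionless datagrams; no handshake, no retransmission.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"frame-0042", ("198.51.100.7", 9999))  # fire-and-forget, e.g. game/video data
udp.close()
```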

The Network layer is IP and handles routing packets. A packet is a chunk of data. The layer routes data based on logical addressing. Routers are part of the Network layer: when you send something, it must make multiple hops, and the Network layer determines which path the data should take (which hops to make) based on network conditions, service priority, and more. If there are traffic issues, it can also handle switching and re-routing packets.
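
A toy Python sketch of the "route by logical addressing" idea: given a destination IP, pick the most specific (longest-prefix) matching route. The table below is invented purely for illustration.

```python
import ipaddress

# A made-up routing table: (destination network, next hop).
routes = [
    (ipaddress.ip_network("10.0.0.0/8"),   "10.0.0.1"),      # internal network
    (ipaddress.ip_network("10.1.2.0/24"),  "10.1.2.254"),     # more specific subnet
    (ipaddress.ip_network("0.0.0.0/0"),    "192.168.1.1"),    # default route (your router)
]

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    # Of all routes that contain the destination, the longest prefix wins.
    matches = [(net, hop) for net, hop in routes if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.33"))     # -> 10.1.2.254
print(next_hop("172.217.3.78"))  # -> 192.168.1.1 (off to the ISP via the default route)
```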