IPv4
[0-255].[0-255].[0-255].[0-255] (four 8-bit octets)
a public IPv4 address is unique across the whole web and can be geo-located easily
**Private Network:**
- machines in the same private network can talk to each other
- IPs are unique only within the private network
- machines connect to the WWW through a NAT + internet gateway (a proxy)
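The private-vs-public distinction can be checked programmatically; a minimal sketch using Python's stdlib `ipaddress` module (the sample addresses below are made up):

```python
import ipaddress

def is_private(ip: str) -> bool:
    """True for RFC 1918 private ranges (loopback/link-local also count as private)."""
    return ipaddress.ip_address(ip).is_private

print(is_private("10.0.1.5"))     # True  (10.0.0.0/8)
print(is_private("192.168.0.7"))  # True  (192.168.0.0/16)
print(is_private("54.210.1.2"))   # False (public address)
```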
Elastic IP
- When you stop and then start an EC2 instance, its public IP can change. If you need a fixed public IP for your instance, you need an Elastic IP
- An Elastic IP is a public IPv4 address you own as long as you don't release it. You can attach it to one instance at a time
- Try to avoid using Elastic IPs; they often reflect poor architectural decisions. Instead, use a random public IP and register a DNS name to it
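A sketch of the Elastic IP lifecycle with the AWS CLI (assumes a configured CLI; the instance and allocation IDs below are placeholders):

```shell
# Allocate an Elastic IP in the account:
aws ec2 allocate-address --domain vpc

# Attach it to one instance (use the AllocationId returned above):
aws ec2 associate-address \
    --instance-id i-0abc123def456 \
    --allocation-id eipalloc-0123456789abcdef0

# Release it when no longer needed (an unattached Elastic IP incurs charges):
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0
```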
By default, your EC2 machine comes with:
- A private IP for the internal AWS Network
- A public IP, for the WWW.
When we SSH into our EC2 machines:
- We can't use a private IP, because we are not in the same network. We can only use the public IP.
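For example (the key path, user name, and IPs below are placeholders; the user depends on the AMI, e.g. `ec2-user` on Amazon Linux, `ubuntu` on Ubuntu):

```shell
# Works: connecting to the instance's public IP from the internet
ssh -i ~/.ssh/my-key.pem ec2-user@54.210.1.2

# Fails from outside the VPC: the private IP is not routable over the WWW
# ssh -i ~/.ssh/my-key.pem ec2-user@172.31.5.10
```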
Placement Groups
- To control the EC2 instance placement strategy, we use placement groups.
- Strategies for the group:
- Cluster - clusters instances into a low-latency group in a single Availability Zone; every instance is on the same rack/hardware
  - Pros: great network (low latency, high throughput)
  - Cons: if the rack fails, all instances fail at the same time
  - Use case: a big data job that needs to complete fast, or an app that needs extremely low latency and high network throughput
- Spread - spreads instances across underlying hardware (max 7 instances per group per AZ); for critical applications. Each instance is on separate hardware
  - Pros: can span across AZs, reduces failure risk; EC2 instances are on different physical hardware
  - Cons: limited to 7 instances per group per AZ
  - Use case: apps that need maximum high availability, or critical apps where each instance must be isolated from the others' failures
- Partition - spreads instances across many different partitions (which rely on different sets of racks) within an AZ; scales to 100s of EC2 instances per group (Hadoop, Cassandra, Kafka)
  - Up to 7 partitions per AZ. Can span multiple AZs in the same region. Up to 100s of EC2 instances. Instances in one partition do not share racks with instances in other partitions
  - A partition failure can affect many EC2 instances but won't affect other partitions. EC2 instances get access to the partition information as metadata
  - Use cases: HDFS, HBase, Cassandra, Kafka
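As a sketch, a partition placement group can be created and queried with the AWS CLI (assumes a configured CLI; the group name and IDs below are placeholders):

```shell
# Create a partition placement group with 7 partitions:
aws ec2 create-placement-group \
    --group-name my-partition-group \
    --strategy partition \
    --partition-count 7

# Launch an instance into it (AMI ID is a placeholder):
aws ec2 run-instances \
    --image-id ami-12345678 \
    --instance-type m5.large \
    --placement "GroupName=my-partition-group"

# From inside the instance, read its partition number via instance metadata:
curl http://169.254.169.254/latest/meta-data/placement/partition-number
```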