CloudStack basic network setup

Cluster name. Enter a name for the cluster. This can be text of your choosing and is not used by CloudStack. Guest CIDR. The CIDR that describes the IP addresses in use in the guest networks of this zone. As a matter of good practice, set different CIDRs for different zones; this will make it easier to set up VPNs between networks in different zones. Public IP range. A range of IP addresses that are assumed to be accessible from the Internet and will be allocated for access to guest networks.

Log in to the CloudStack UI. In Zones, click View More, then click the zone to which you want to add a pod. Click the Compute and Storage tab. In the Pods node of the diagram, click View All. Click Add Pod.

Enter the following details in the dialog: the name of the pod. Click OK. Next, click the Compute tab. In the Clusters node of the diagram, click View All. Click Add Cluster. Choose the hypervisor type for this cluster and the pod in which you want to create it. Follow these requirements: do not put more than 8 hosts in a vSphere cluster, and make sure the hypervisor hosts do not have any VMs already running before you add them to CloudStack.
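These pod and cluster steps can also be scripted against the CloudStack API (createPod and addCluster). The sketch below is a minimal example assuming the community "cs" Python client; the endpoint, credentials, IDs, and IP values are placeholders, not values from this guide.

    from cs import CloudStack  # community CloudStack API client: pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Create a pod in an existing zone (IDs and IPs are placeholders).
    cs.createPod(zoneid="ZONE_ID", name="pod01",
                 gateway="10.0.0.1", netmask="255.255.255.0",
                 startip="10.0.0.10", endip="10.0.0.100")

    # Add a cluster to the pod; the hypervisor must match the hosts you will add.
    cs.addCluster(zoneid="ZONE_ID", podid="POD_ID",
                  clustername="cluster01", hypervisor="KVM",
                  clustertype="CloudManaged")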

Log in to the UI and click View Clusters. In Hypervisor, choose VMware. Nexus dvSwitch Password: the password associated with the Nexus username specified above. Warning: be sure you have performed the additional CloudStack-specific configuration steps described in the hypervisor installation section for your particular hypervisor. Note: when copying and pasting a command, make sure the command has pasted as a single line before executing.

Log in to the CloudStack UI as administrator. In the Clusters node, click View All. Click the cluster where you want to add the host, click View Hosts, then click Add Host. Provide the following information: Host Name (the DNS name or IP address of the host), the username (usually root), the password for that user, and Host Tags (optional; any labels that you use to categorize hosts for ease of maintenance). Repeat for additional hosts; the API equivalent is sketched below.
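Registering a host through the API uses the same fields. A minimal sketch, again assuming the community "cs" client with placeholder IDs and credentials:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Register a hypervisor host in an existing cluster. The url points at
    # the host itself, username/password are the host's root credentials,
    # and hosttags carries the optional labels described above.
    cs.addHost(zoneid="ZONE_ID", podid="POD_ID", clusterid="CLUSTER_ID",
               hypervisor="KVM", url="http://10.0.0.21",
               username="root", password="host-root-password",
               hosttags="ssd")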

The storage server should be a machine with a large number of disks, ideally managed by a hardware RAID controller. Minimum required capacity depends on your needs. When setting up primary storage, follow this restriction: primary storage cannot be added until a host has been added to the cluster.

If you do not provision shared primary storage, you must adjust the corresponding system.* global configuration parameter so that system VMs can run from local storage. Warning: when using preallocated storage for primary storage, be sure there is nothing on the storage (for example, use an empty SAN volume or an empty NFS share). Each Secondary Storage server must be available to all hosts in the zone. Warning: ensure that nothing is stored on the server.

Warning: heterogeneous Secondary Storage is not supported in Regions. Warning: even if the UI allows you to uncheck this box, do not do so. In the left navigation bar, click Infrastructure. In Secondary Storage, click View All. Fill out the dialog box fields (Zone and NFS server), then click OK.
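On recent releases the same secondary storage registration is available through the addImageStore API. A minimal sketch with placeholder values:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Register an NFS image store (secondary storage) for a zone.
    cs.addImageStore(name="secondary1", provider="NFS", zoneid="ZONE_ID",
                     url="nfs://nfs-server.example.com/export/secondary")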

To try out the new cloud, go to the Instances tab and filter by My Instances. Click Add Instance and follow the steps in the wizard. Choose the zone you just added. In the template selection, choose the template to use in the VM. If this is a fresh installation, likely only the provided CentOS template is available. Select a service offering. Be sure that the hardware you have allows starting the selected service offering. In data disk offering, if desired, add another data disk. This is a second volume that will be available to, but not mounted in, the guest. A reboot is not required if you have a PV-enabled OS kernel in use. In default network, choose the primary network for the guest.

In a trial installation, you would have only one option here. Optionally give your VM a name and a group. Use any descriptive text you would like. Click Launch VM. Your VM will be created and started. It might take some time to download the template and complete the VM startup.
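The Add Instance wizard maps to a single deployVirtualMachine API call. A minimal sketch with placeholder IDs:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Launch a VM in the new zone from a template, with one guest network.
    cs.deployVirtualMachine(zoneid="ZONE_ID",
                            templateid="TEMPLATE_ID",
                            serviceofferingid="SERVICE_OFFERING_ID",
                            networkids="NETWORK_ID",
                            name="web01", displayname="web01")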

You have successfully completed a CloudStack installation. A few global configuration parameters are worth reviewing. The management network CIDR describes the network on which the management CIDRs reside; this variable must be set for deployments that use vSphere.

It is recommended to be set for other deployments as well. The XenServer multipath setting defaults to false; set it to true if you would like CloudStack to enable multipath on XenServer hosts when they are added. It does not impact NFS operation and is harmless there.

The allowed internal sites setting for secondary storage is a comma-separated list of CIDRs; downloads from other URLs will go through the public interface. We suggest you set this to one or two hardened internal machines where you keep your templates. The local-storage setting defaults to false, so by default CloudStack will not use storage that is local to the host; change it to true if you want to use local storage and you understand the reliability and feature drawbacks of choosing local storage.

If you are using multiple Management Servers, you should enter a load-balanced IP address that is reachable via the private network.

The limit applies at the cloud level and can vary from cloud to cloud. You can override it with a lower value on a particular API call by using the page and pagesize API command parameters, as in the sketch below.
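A minimal pagination sketch, again assuming the community "cs" Python client; the page size of 50 is arbitrary:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Walk the VM list 50 entries at a time instead of relying on the
    # cloud-level default page size.
    page = 1
    while True:
        result = cs.listVirtualMachines(page=page, pagesize=50)
        vms = result.get("virtualmachine", [])
        if not vms:
            break
        for vm in vms:
            print(vm["name"], vm["state"])
        page += 1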

Dedicated HA hosts will be used only for HA-enabled VMs that are restarting due to the failure of another host; mark such hosts with the ha.tag label. The vCenter session timeout defaults to 20 minutes; increase the timeout value to avoid timeout errors in VMware deployments, because certain VMware operations take more than 20 minutes. To view or edit these settings, log in to the UI as administrator. In the left navigation bar, click Global Settings. In Select View, choose one of the following: Global Settings, which displays a list of the parameters with brief descriptions and current values, or Hypervisor Capabilities, which displays a list of hypervisor versions with the maximum number of guests supported for each.

Use the search box to narrow down the list to those you are interested in. In the Actions column, click the Edit icon to modify a value. If you are viewing Hypervisor Capabilities, you must click the name of the hypervisor first to display the editing screen.
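Global settings can also be read and changed through the API. A minimal sketch; the parameter name used here (expunge.delay) is only an illustration:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Look up the current value, then update it. Some parameters only take
    # effect after the Management Server is restarted.
    print(cs.listConfigurations(name="expunge.delay"))
    cs.updateConfiguration(name="expunge.delay", value="120")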

In the left navigation bar, click Infrastructure or Accounts, depending on where you want to set a value. Find and click the name of the particular resource that you want to work with, then click the Settings tab and edit the value you need. For capacity thresholds, keep the corresponding notification threshold lower than the disable threshold so that you are notified beforehand.

If you want to enable security groups for guest traffic isolation, choose the corresponding option. Path: the exported path from the server. Tags (optional): the comma-separated list of tags for this storage device.

It should be an equivalent set or superset of the tags on your disk offerings. Target IQN: the IQN of the target, an iqn.-prefixed identifier. LUN: the LUN number, for example 3. SR Name-Label: enter the name-label of the SR that has been set up outside CloudStack.

Path: the path on each host where this primary storage is mounted. For vSphere, this is a combination of the datacenter name and the datastore name.
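For NFS primary storage, the same fields appear in the createStoragePool API call. A minimal sketch with placeholder IDs; the tags value should be a superset of the tags used on your disk offerings:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Register NFS primary storage for a cluster.
    cs.createStoragePool(zoneid="ZONE_ID", podid="POD_ID",
                         clusterid="CLUSTER_ID", name="primary1",
                         url="nfs://nfs-server.example.com/export/primary",
                         tags="nfs,bronze")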

Several global configuration parameters deserve a closer look. The allowed internal sites setting mentioned above is used to protect your internal network from rogue attempts to download arbitrary files using the template download feature. The local-storage setting determines whether CloudStack will use storage that is local to the host for data disks, templates, and snapshots. The host setting is the IP address of the Management Server. ha.tag is the label you want to use throughout the cloud to designate certain hosts as dedicated HA hosts. The vCenter session timeout parameter determines the vCenter session timeout value. Finally, if the system public IP setting is true and an account has one or more dedicated public IP ranges, IPs are acquired from the system pool after all the IPs dedicated to the account have been consumed.

The percentage, as a value between 0 and 1, of allocated storage utilization above which alerts are sent that the storage is below the threshold. The percentage, as a value between 0 and 1, of storage utilization above which alerts are sent that the available storage is below the threshold. The percentage, as a value between 0 and 1, of CPU utilization above which alerts are sent that the available CPU is below the threshold.

The percentage, as a value between 0 and 1, of memory utilization above which alerts are sent that the available memory is below the threshold. The percentage, as a value between 0 and 1, of CPU utilization above which allocators will disable that cluster from further usage. Users can create additional Security Groups at any time; however, existing guest instances cannot be assigned to newly created Security Groups. Instances have to be allocated to a Security Group when they are created.

Guest Instances within a Security Group are able to communicate with each other directly. Ingress and egress rules on the Security Group control the flow of traffic, both in and out of the group. This was an introduction to networking in CloudStack 3. There is a lot more to CloudStack networking, as it is one of the key differentiators between a simple Virtual Private Server and a true cloud offering.

With the AutoScale feature, a VM instance counts as active only when the application it hosts is up and running. For example, if the VM is deployed for a Web service, it should have the Web server running, the database connected, and so on.

Compute offering : A predefined set of virtual hardware attributes, including CPU speed, number of CPUs, and RAM size, that the user can select when creating a new virtual machine instance. Choose one of the compute offerings to be used while provisioning a VM instance as part of scaleup action.

Min Instance : The minimum number of active VM instances assigned to a load balancing rule. The active VM instances are the application instances that are up, serving traffic, and being load balanced. This parameter ensures that a load balancing rule has at least the configured number of active VM instances available to serve the traffic. If an application, such as SAP, running on a VM instance is down for some reason, the VM is not counted as part of the Min Instance parameter, and the AutoScale feature initiates a scaleup action if the number of active VM instances falls below the configured value.

Similarly, when an application instance comes up from its earlier down state, this application instance is counted as part of the active instance count and the AutoScale process initiates a scaledown action when the active instance count breaches the Max instance value.

Max Instance : Maximum number of active VM instances that should be assigned to a load balancing rule. This parameter defines the upper limit of active VM instances that can be assigned to a load balancing rule. Specifying a large value for the maximum instance parameter might result in provisioning a large number of VM instances, which in turn can lead to a single load balancing rule exhausting the VM instance limit specified at the account or domain level.

So there may be scenarios where the number of VMs provisioned for a scaleup action might be more than the configured Max Instance value. Once the application instances in the VMs are up from an earlier down state, the AutoScale feature starts aligning to the configured Max Instance value.
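The same Min Instance and Max Instance settings appear in the AutoScale API. A heavily simplified sketch, assuming the load balancer rule, VM profile, and scale-up/scale-down policies already exist (all IDs are placeholders):

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Attach an AutoScale VM group to an existing load balancer rule.
    # minmembers and maxmembers correspond to Min Instance and Max Instance.
    cs.createAutoScaleVmGroup(lbruleid="LB_RULE_ID",
                              vmprofileid="VM_PROFILE_ID",
                              scaleuppolicyids="SCALE_UP_POLICY_ID",
                              scaledownpolicyids="SCALE_DOWN_POLICY_ID",
                              minmembers=2, maxmembers=6)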

Additionally, if you want to configure the advanced settings, click Show advanced settings and specify the additional parameters. When the AutoScale configuration is disabled, no scaleup or scaledown action is performed. You can use this downtime for maintenance activities. The button toggles between enable and disable, depending on whether AutoScale is currently enabled or not. After the maintenance operations are done, you can re-enable the AutoScale configuration.

You can update the various parameters and add or delete the conditions in a scaleup or scaledown rule. Before you update an AutoScale configuration, ensure that you disable the AutoScale load balancer rule by clicking the Disable AutoScale button. After you modify the required AutoScale parameters, click Apply. To support global server load balancing, region-level services and service providers are introduced.

Based on the nature of deployment, GSLB represents a set of technologies that is used for various purposes, such as load sharing, disaster recovery, performance, and legal obligations. With GSLB, workloads can be distributed across multiple data centers situated at geographically separated locations. GSLB can also provide an alternate location for accessing a resource in the event of a failure, or to provide a means of shifting traffic easily to simplify maintenance, or both.

Global server load balancing is used to manage the traffic flow to a web site hosted on two separate zones that ideally are in different geographic locations.

The following is an illustration of how GSLB functionality is provided in CloudStack: an organization, xyztelco, has set up a public cloud that spans two zones, Zone-1 and Zone-2, across geographically separated data centers that are managed by CloudStack.

Tenant-A of the cloud launches a highly available solution by using the xyztelco cloud. CloudStack orchestrates setting up a virtual server on the LB service provider in each zone. Virtual server 1, set up on the LB service provider in Zone-1, represents a publicly accessible virtual server that clients reach at a public IP in that zone; virtual server 2, set up on the LB service provider in Zone-2, is a publicly accessible virtual server reached at a public IP in Zone-2. At this point Tenant-A has the service enabled in both zones, but has no means to set up a disaster recovery plan if one of the zones fails.

Additionally, there is no way for Tenant-A to load balance the traffic intelligently to one of the zones based on load, proximity, and so on. The cloud administrator of xyztelco provisions a GSLB service provider in both zones and enables GSLB as a service for the tenants that use zones 1 and 2. A domain name is provided for the tenant's service. GSLB virtual server 1 is configured to start monitoring the health of virtual servers 1 and 2 in Zone-1; GSLB virtual server 2 is configured to monitor the health of virtual servers 1 and 2 as well.

CloudStack will bind the domain name to both GSLB virtual servers. At this point, Tenant-A's service is globally reachable at that domain name. The private DNS server for the xyztelco domain is configured to resolve the domain to the GSLB providers in both zones.

When a client sends a DNS request to resolve the domain, the request is resolved, depending on the health of the virtual servers being load balanced, to the public IP associated with the selected virtual server. To configure a GSLB deployment, you must first configure a standard load balancing setup for each zone. This enables you to balance load across the different servers in each zone in the region.

Finally, bind the domain to the GSLB virtual servers. The GSLB configurations on the two appliances at the two different zones are identical, although each site's load-balancing configuration is specific to that site.

Perform the following as a cloud administrator; as per the example given above, the administrator of xyztelco is the one who sets up GSLB. In the cloud, bind the domain name to the GSLB virtual server; the domain name is obtained from the domain details. Users can load balance traffic across the availability zones in the same region or in different regions. Users can specify a unique name across the cloud for a globally load balanced service; the provided name is used as the domain name under the DNS name associated with the cloud.

The user-provided name, along with the admin-provided DNS name, is used to produce a globally resolvable FQDN for the user's globally load balanced service. For example, if the admin has configured xyztelco.com as the cloud DNS name, a user-provided service name is combined with it to form the FQDN. The user can also set a weight on each zone-level virtual server; the weight is considered by the load balancing method when distributing traffic.
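Via the API, the GSLB rule is created at the region level and the per-zone load balancer rules are then assigned to it. A rough sketch with placeholder IDs, assuming the createGlobalLoadBalancerRule and assignToGlobalLoadBalancerRule calls:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Create a GSLB rule for the region; the gslbdomainname is the
    # user-chosen service name that gets combined with the cloud DNS name.
    cs.createGlobalLoadBalancerRule(name="web-gslb", regionid=1,
                                    gslbdomainname="web",
                                    gslbservicetype="tcp",
                                    gslblbmethod="roundrobin")

    # Attach the zone-level LB rules (the per-zone virtual servers).
    cs.assignToGlobalLoadBalancerRule(
        id="GSLB_RULE_ID",
        loadbalancerrulelist="ZONE1_LB_RULE_ID,ZONE2_LB_RULE_ID")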

The GSLB functionality supports session persistence, where a series of client requests for a particular domain name is sent to a virtual server in the same zone. Currently, CloudStack does not support orchestration of services across zones; the notion of region-level services and service providers is being introduced. The IP ranges for guest network traffic are set on a per-account basis by the user. This allows users to configure their network in a fashion that enables VPN linking between their guest network and their clients.

In shared networks in a Basic zone and in Security Group-enabled Advanced networks, you have the flexibility to add multiple guest IP ranges from different subnets. You can add or remove one IP range at a time. If you want a Portable IP, click Yes in the confirmation dialog; if you want a normal public IP, click No. When the last rule for an IP address is removed, you can release that IP address. If a guest VM is part of more than one network, static NAT rules will function only if they are defined on the default network.

Click the Static NAT button. By default, all incoming traffic to the public IP address is rejected, and all outgoing traffic from the guests is also blocked. For example, you can use a firewall rule to open a range of ports on the public IP address, and then use port forwarding rules to direct traffic from individual ports within that range to specific ports on user VMs.

By default, all incoming traffic to the public IP address is rejected by the firewall. To allow external traffic, you can open firewall ports by specifying firewall rules, optionally restricted to particular source addresses; this is useful when you want to allow only incoming requests from certain IP addresses. You cannot use firewall rules to open ports for an elastic IP address; when elastic IP is used, outside access is instead controlled through the use of security groups.
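For a standard (non-elastic) public IP, the static NAT and firewall steps above look like this through the API. A minimal sketch with placeholder IDs; the source CIDR is just an example:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Point the public IP at one guest VM via static NAT.
    cs.enableStaticNat(ipaddressid="PUBLIC_IP_ID",
                       virtualmachineid="VM_ID")

    # Open TCP port 22 on that IP, restricted to one source CIDR.
    cs.createFirewallRule(ipaddressid="PUBLIC_IP_ID", protocol="tcp",
                          startport=22, endport=22,
                          cidrlist="203.0.113.0/24")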

In an Advanced zone, you can also create egress firewall rules by using the virtual router. The Firewall tab is not displayed by default when CloudStack is installed; to display it, the CloudStack administrator must set the corresponding firewall global configuration parameter to true. Egress traffic originates from a private network and goes to a public network, such as the Internet. By default, egress traffic is blocked in the default network offerings, so no outgoing traffic is allowed from a guest network to the Internet.

However, you can control the egress traffic in an Advanced zone by creating egress firewall rules. When an egress firewall rule is applied, the traffic specific to the rule is allowed and the remaining traffic is blocked.

When all the firewall rules are removed, the default policy, Block, is applied. To add an egress rule, click the Egress rules tab and fill out the fields (source CIDR, protocol, and port range) to specify what type of traffic is allowed to be sent out of VM instances in this guest network.
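A minimal API sketch for the same egress rule, with placeholder IDs; this example allows outbound HTTPS from a network whose default egress policy is Block:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Allow outbound TCP 443 from the guest network; all other egress
    # traffic remains blocked under the Block default policy.
    cs.createEgressFirewallRule(networkid="GUEST_NETWORK_ID",
                                protocol="tcp",
                                startport=443, endport=443,
                                cidrlist="0.0.0.0/0")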

The default egress policy for an Isolated guest network is configured by using the network offering. Use the create network offering option to determine whether the default policy should be to block or to allow all traffic from a guest network to the public network.

Use this network offering to create the network. If no policy is specified, by default all the traffic is allowed from the guest network that you create by using this network offering.

If you select Allow for a network offering, egress traffic is allowed by default. However, when an egress rule is configured for a guest network, rules are applied to block the specified traffic and the rest is allowed. If no egress rules are configured for the network, egress traffic is accepted. If you select Deny for a network offering, egress traffic for the guest network is blocked by default. However, when an egress rule is configured for a guest network, rules are applied to allow the specified traffic.

While implementing a guest network, CloudStack adds the firewall egress rule specific to the default egress policy for the guest network. A port forward service is a set of port forwarding rules that define a policy. A port forward service is then applied to one or more guest VMs. The guest VM then has its inbound network access managed according to the policy defined by the port forwarding service. This is useful when you want to allow only incoming requests from certain IP addresses to be forwarded.

A guest VM can be in any number of port forward services, and port forward services can be defined but have no members. If a guest VM is part of more than one network, port forwarding rules will function only if they are defined on the default network. You cannot use port forwarding to open ports for an elastic IP address; see Security Groups.
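A minimal port forwarding sketch through the API, with placeholder IDs; this forwards public port 8080 on an acquired IP to port 80 on one guest VM:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Forward public TCP 8080 on the acquired IP to TCP 80 on the VM.
    cs.createPortForwardingRule(ipaddressid="PUBLIC_IP_ID",
                                protocol="tcp",
                                publicport=8080, privateport=80,
                                virtualmachineid="VM_ID")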

The user may choose to associate the same public IP with multiple guests. CloudStack implements a TCP-level load balancer with the following policies: round-robin, least connection, and source IP. CloudStack account owners can create virtual private networks (VPNs) to access their virtual machines. If the guest network is instantiated from a network offering that offers the Remote Access VPN service, the virtual router based on the System VM is used to provide the service. Since each network gets its own virtual router, VPNs are not shared across networks.

The account owner can create and manage users for their VPN. CloudStack does not use its account database for this purpose but uses a separate table. Make sure that not all traffic goes through the VPN; that is, the route installed by the VPN should be only for the guest network and not for all traffic. Click the Enable VPN button.
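Enabling remote access VPN and adding a VPN user can also be done through the API. A minimal sketch with placeholder IDs and credentials:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Enable remote access VPN on the network's source NAT public IP,
    # then add a VPN user (stored in CloudStack's separate VPN user table).
    cs.createRemoteAccessVpn(publicipid="SOURCE_NAT_IP_ID")
    cs.addVpnUser(username="alice", password="vpn-password")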

A Site-to-Site VPN connection helps you establish a secure connection from an enterprise datacenter to the cloud infrastructure. This allows users to access the guest VMs by establishing a VPN connection to the virtual router of the account from a device in the datacenter of the enterprise.

You can also establish a secure connection between two VPC setups or high-availability zones in your environment. The difference from Remote Access VPN is that site-to-site VPNs connect entire networks to each other, for example connecting a branch office network to a company headquarters network. In addition to the specific Cisco and Juniper devices listed above, the expectation is that any Cisco or Juniper device running on the supported operating systems is able to establish VPN connections. If the receiving peer is able to create the same hash independently by using its Preshared Key, it knows that both peers must share the same secret, thus authenticating the customer gateway.

Authentication is accomplished through Preshared Keys. Phase-1 is the first phase in the IKE process. In this initial negotiation phase, the two VPN endpoints agree on the methods to be used to provide security for the underlying IP traffic. Phase-1 authenticates the two VPN gateways to each other by confirming that the remote gateway has a matching Preshared Key.

IKE DH : A public-key cryptography protocol which allows two parties to establish a shared secret over an insecure communications channel. The supported options are None, Group-5 (1536-bit), and Group-2 (1024-bit). Phase-2 is the second phase in the IKE process. In phase-2, new keying material is extracted from the Diffie-Hellman key exchange in phase-1 to provide session keys for protecting the VPN data flow.

Perfect Forward Secrecy : Perfect Forward Secrecy, or PFS, is the property that ensures that a session key derived from a set of long-term public and private keys will not be compromised. This property enforces a new Diffie-Hellman key exchange, which provides keying material with a longer key-material life and thereby greater resistance to cryptographic attacks. The available options are None, Group-5 (1536-bit), and Group-2 (1024-bit).

The security of the key exchange increases as the DH groups grow larger, as does the time of the exchanges. When PFS is turned on, for every negotiation of a new phase-2 SA the two gateways must generate a new set of phase-1 keys. IKE Lifetime (seconds) : The phase-1 lifetime of the security association, in seconds. The default is 86400 seconds (1 day). Whenever the time expires, a new phase-1 exchange is performed. ESP Lifetime (seconds) : The phase-2 lifetime of the security association, in seconds.

The default is 3600 seconds (1 hour). Whenever the value is exceeded, a re-key is initiated to provide new IPsec encryption and authentication session keys.

Dead Peer Detection : Select this option if you want the virtual router to query the liveliness of its IKE peer at regular intervals.
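The customer gateway that collects these IKE/ESP settings can also be created through the API. A minimal sketch; the addresses, secret, and policy strings are placeholders, and the policy format follows CloudStack's cipher-hash;dh-group convention:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Describe the remote (customer) side of a site-to-site VPN using the
    # IKE and ESP parameters discussed above.
    cs.createVpnCustomerGateway(name="branch-office",
                                gateway="198.51.100.1",
                                cidrlist="192.168.100.0/24",
                                ipsecpsk="shared-secret",
                                ikepolicy="aes128-sha1;modp1536",
                                esppolicy="aes128-sha1",
                                ikelifetime=86400, esplifetime=3600,
                                dpd=True)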

Within a few moments, the VPN gateway is created, and you are prompted to view its details; click Yes to confirm. Select Passive if you want to establish a connection between two VPC virtual routers. If you want to establish a connection between two VPC virtual routers, select Passive on only one of them, which then waits for the other VPC virtual router to initiate the connection.

Do not select Passive on the VPC virtual router that initiates the connection. CloudStack provides you with the ability to establish a site-to-site VPN connection between CloudStack virtual routers. Ensure that the customer gateway is pointed to VPC B. The VPN connection is shown in the Disconnected state.

Ensure that the customer gateway is pointed to VPC A. Wait a few seconds; by default it takes about 30 seconds for both VPN connections to show the Connected state. This feature enables you to build Virtual Private Clouds (VPCs), isolated segments of your cloud that can hold multi-tier applications. These tiers are deployed on different VLANs that can communicate with each other.

Such segmentation by means of VLANs logically separates application VMs for higher security and lower broadcast traffic, while they remain physically connected to the same device. The administrator can allow users to create their own VPCs and deploy applications. Both administrators and users can create multiple VPCs. The administrator can create gateways, such as VPN, public, and private gateways, to send traffic to or receive traffic from the VMs. Both administrators and users can create various possible destination-gateway combinations.

However, only one gateway of each type can be used in a deployment. A VPC can have its own virtual network topology that resembles a traditional physical network. You can launch VMs in the virtual network with private addresses in a CIDR range of your choice. You can define network tiers within your VPC network range, which in turn enables you to group similar kinds of instances based on IP address range.

For example, if a VPC has a private range, that range can be subdivided among its tiers. Tiers are distinct locations within a VPC that act as isolated networks, which do not have access to other tiers by default. Tiers are set up on different VLANs that can communicate with each other by using a virtual router. Tiers provide inexpensive, low-latency network connectivity to other tiers within the VPC. If you have already created tiers, the VPC diagram is displayed.

Click Create Tier to add a new tier. By default, all incoming traffic to the guest networks is blocked and all outgoing traffic from guest networks is allowed; once you add an ACL rule for outgoing traffic, only the outgoing traffic specified in that ACL rule is allowed and the rest is blocked. To open ports, you must create a new network ACL. Network ACL items are simply numbered rules that are evaluated in order, starting with the lowest numbered rule.

These rules determine whether traffic is allowed in or out of any tier associated with the network ACL. Each tier can be associated with only one ACL. The default behavior is that all incoming traffic is blocked and outgoing traffic from the tiers is allowed. The default network ACL cannot be removed or modified; its contents implement this default behavior.
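A rough API sketch for creating a VPC, a custom ACL list with one ingress rule, and a tier that uses it. All IDs, CIDRs, and offering names are placeholders:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Create the VPC itself.
    cs.createVPC(name="vpc01", displaytext="vpc01", cidr="10.0.0.0/16",
                 vpcofferingid="VPC_OFFERING_ID", zoneid="ZONE_ID")

    # Create an ACL list and allow inbound HTTP into tiers that use it.
    cs.createNetworkACLList(name="web-acl", description="web tier ACL",
                            vpcid="VPC_ID")
    cs.createNetworkACL(aclid="ACL_LIST_ID", protocol="tcp",
                        startport=80, endport=80,
                        traffictype="ingress", action="Allow",
                        cidrlist="0.0.0.0/0")

    # Create a tier in the VPC that is bound to the ACL list.
    cs.createNetwork(name="web-tier", displaytext="web tier",
                     zoneid="ZONE_ID", vpcid="VPC_ID", aclid="ACL_LIST_ID",
                     networkofferingid="VPC_TIER_OFFERING_ID",
                     gateway="10.0.1.1", netmask="255.255.255.0")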

Click the appropriate button in the Details tab. A private gateway can be added by the root admin only, and you can configure multiple private gateways for a single VPC. Click the Configure button of the VPC you want to work with. Physical Network : The physical network you have created in the zone.

Gateway : The gateway through which the traffic is routed to and from the VPC. By default, all the traffic is blocked. The new gateway appears in the list. You can repeat these steps to add more gateways for this VPC.

The Source NAT service on a private gateway can be enabled while adding the private gateway. On deletion of a private gateway, source NAT rules specific to that private gateway are deleted. The ACLs contain both allow and deny rules.

As per the rule, all the ingress traffic to the private gateway interface and all the egress traffic out from the private gateway interface are blocked. You can change this default behaviour while creating a private gateway.

Alternatively, you can do the following: CloudStack enables you to specify routing for the VPN connection you create. You can enter one or more CIDR addresses to indicate which traffic is to be routed back to the gateway. CloudStack also enables you to block a list of routes so that they are not assigned to any of the VPC private gateways; specify the routes that you want to block in the blacklisted-routes global setting.

Note that the parameter update affects only new static route creations. If you block an existing static route, it remains intact and continues functioning. You cannot add a static route if the route is blacklisted for the zone. Follow the on-screen instructions to add an instance; for information on adding an instance, see the Installation Guide. With this feature, VMs deployed in a multi-tier application can receive monitoring services via a shared network provided by a service provider.

The IPs are associated with the guest network only when the first port forwarding, load balancing, or static NAT rule is created for the IP or the network.

You are prompted for confirmation because IP addresses are a limited resource. If you no longer need a particular IP, you can disassociate it from its VPC and return it to the pool of available addresses. An IP address can be released from its tier only when all the networking rules (port forwarding, load balancing, or static NAT) for that IP address have been removed.

In the Details tab, click the Release IP button. The traffic is load balanced within a tier based on your configuration. When you use the internal LB service, traffic received at a tier is load balanced across different VMs within that tier; for example, traffic that reaches the Web tier is redirected to another VM in that tier. External load balancing devices are not supported for internal LB.

The service is provided by an internal LB VM configured on the target tier. A CloudStack user or administrator may create load balancing rules that balance traffic received at a public IP across one or more VMs that belong to a network tier that provides the load balancing service in a VPC. A user creates a rule, specifies an algorithm, and assigns the rule to a set of VMs within a tier.

Click the Configure button of the VPC for which you want to configure load balancing rules. The new load balancing rule appears in the list. You can repeat these steps to add more load balancing rules for this IP address. CloudStack also supports sharing workload across different tiers within your VPC.

Assume that multiple tiers are set up in your environment, such as a Web tier and an Application tier. If you want the traffic coming from the Web tier to the Application tier to be balanced, use the internal load balancing feature offered by CloudStack. In that setup, a public LB rule is created for the public IP of the Web tier, and two internal load balancing rules are created on the Application tier, one for each guest IP and port pair being balanced. Description : A short description of the rule that can be displayed to users.

Source Port : The port associated with the source IP. Traffic on this port is load balanced. Choose the load balancing algorithm you want CloudStack to use. CloudStack supports the following well-known algorithms: round-robin, least connections, and source.

Public Port : The port to which public traffic will be addressed on the IP address you acquired in the previous step. Private Port : The port on which the instance is listening for forwarded public traffic. Protocol : The communication protocol in use between the two ports. Select the name of the instance to which this rule applies, and click Apply.
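A minimal sketch of the same rule through the API, with placeholder IDs; it creates a round-robin rule on an acquired public IP and assigns two VMs to it:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Create the load balancing rule on the acquired public IP.
    cs.createLoadBalancerRule(publicipid="PUBLIC_IP_ID", name="web-lb",
                              algorithm="roundrobin",
                              publicport=80, privateport=80)

    # Assign the member VMs that should receive the balanced traffic.
    cs.assignToLoadBalancerRule(id="LB_RULE_ID",
                                virtualmachineids="VM1_ID,VM2_ID")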

You can remove a tier from a VPC; a removed tier cannot be revoked. When a tier is removed, only the resources of that tier are expunged. All the network rules (port forwarding, load balancing, and static NAT) and the IP addresses associated with the tier are removed. In the Network Details tab, click the Delete Network button. You can edit the name and description of a VPC: select the VPC, then click the Edit button. A network that you can provision without having to deploy any VMs on it is called a persistent network. When you create other types of networks, a network is only a database entry until the first VM is created on that network.

With the addition of persistent networks, you have the ability to create a network in CloudStack in which physical devices can be deployed without having to run any VMs on it. Additionally, you can deploy physical devices on that network.

One of the advantages of having a persistent network is that you can create a VPC with a tier consisting of only physical devices. For example, you might create a VPC for a three-tier application, deploy VMs for Web and Application tier, and use physical machines for the Database tier.

Another use case is that if you are providing services by using physical hardware, you can define the network as persistent and therefore even if all its VMs are destroyed the services will not be discontinued. From the Network Offering drop-down, select the persistent network offering you have just created.

No manual configuration is required to set up these zones, because CloudStack will configure them automatically when you add the Palo Alto Networks firewall device to CloudStack as a service provider.

This implementation depends on two zones, one for the public side and one for the private side of the firewall. This implementation supports standard physical interfaces as well as grouped physical interfaces called aggregated interfaces. Both standard interfaces and aggregated interfaces are treated the same, so they can be used interchangeably. Because no broadcast or gateway IPs are in this single IP range, there is no way for the firewall to route the traffic for these IPs.

For the other settings, there are probably additional configurations which will work, but I will just document a common case. When adding networks in CloudStack, select this network offering to use the Palo Alto Networks firewall. In addition to the standard functionality exposed by CloudStack, we have added a couple additional features to this implementation.

This is helpful for keeping track of issues that can arise on the firewall. A typical guest traffic setup works as follows: the Management Server automatically creates a virtual router for each network. Servers are connected as follows: storage devices are connected only to the network that carries management traffic; hosts are connected to networks for both management traffic and public traffic; hosts are also connected to one or more networks carrying guest traffic.

To configure the base guest network: In the left navigation, choose Infrastructure. Click the Network tab. Click Add guest network. The Add guest network window is displayed. Provide the following information: Name : The name of the network. This will be user-visible. Display Text : The description of the network.

This will be user-visible. Zone : The zone in which you are configuring the guest network. Network offering : If the administrator has configured multiple network offerings, select the one you want to use for this network. Guest Gateway : The gateway that the guests should use. Guest Netmask : The netmask in use on the subnet the guests will use. Click OK.

On Zones, click View More. Click the zone to which you want to add a guest network. Click the Physical Network tab. Click the physical network you want to work with. On the Guest node of the diagram, click Configure. The Add guest network window is displayed. Specify the following: Name : The name of the network.

This will be visible to the user. Domain : Selecting Domain limits the scope of this guest network to the domain you specify. The network will not be available to other domains. If you select Subdomain Access, the guest network is available to all the subdomains within the selected domain. Account : The account for which the guest network is being created.
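The same guest network can be created through the API. A minimal sketch with placeholder IDs; the gateway and netmask are examples, and domainid/account are only needed when scoping the network as described above:

    from cs import CloudStack  # pip install cs

    cs = CloudStack(endpoint="http://mgmt-server:8080/client/api",
                    key="API_KEY", secret="SECRET_KEY")

    # Create a guest network using the fields from the Add guest network dialog.
    cs.createNetwork(name="guestnet01", displaytext="guest network",
                     zoneid="ZONE_ID",
                     networkofferingid="NETWORK_OFFERING_ID",
                     gateway="10.1.1.1", netmask="255.255.255.0",
                     domainid="DOMAIN_ID", account="ACCOUNT_NAME")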


