Azure Site Recovery (ASR) limitations that are difficult to bypass


We have seen issues with the limitations of Azure Site Recovery and tried multiple ways to get past them; they eventually became a roadblock to using ASR for migrating large Oracle database servers. Here are the two major limitations of ASR that I have faced.
  1. ASR doesn’t support clustered disks.
  2. ASR doesn’t support GPT/UEFI disks (as OS disks).
Now, most enterprises will have clustered disks, so the workaround for the above problem is to disable the cluster services, create stand-alone VMs, and migrate those standalone servers. That may not work for many lift-and-shift migration strategies for large databases and other workloads.
Converting the disk from GPT to MBR will not work here either, because these are mostly OS disks and there is no supported way to convert them to an MBR partition.
We considered a workaround: create a VHDX from the GPT disk using Disk2vhd and then create a Generation 2 VM on Hyper-V, thinking that ASR would work in this case. However, we found that Windows Server 2008 R2 isn’t supported as a Generation 2 VM, so we were not able to proceed further unless we upgraded the OS, which was a no-go for the application owners.
So if you are planning to move physical boxes or VMs that have GPT partitions on the C drive or that use clustered disks, it may be better to look for a different migration tool instead of ASR at this point in time.
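Before committing to a tool, it helps to inventory which disks would trip these limitations. Below is a minimal sketch for Linux guests, assuming `lsblk` output; `classify_disks` is a hypothetical helper, and on Windows you would instead check the partition style with diskpart or Disk Management.

```shell
# Sketch: classify disks by partition table, reading "NAME PTTYPE" lines
# such as those produced by `lsblk -dno NAME,PTTYPE` on a Linux guest.
# (Hypothetical helper name; adapt to your own inventory scripts.)
classify_disks() {
  while read -r name pttype; do
    case "$pttype" in
      gpt) echo "$name: GPT - if this is the OS disk, ASR cannot replicate it" ;;
      dos) echo "$name: MBR - OK for ASR" ;;
      *)   echo "$name: unknown partition table" ;;
    esac
  done
}

# Typical use on a live system:
#   lsblk -dno NAME,PTTYPE | classify_disks
```

Running this across an estate before a migration sprint gives you an early list of machines that need a different tool or strategy.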
Looking at third-party options, I came across a vendor called Double-Take, and when we checked their cloud migration user guide, we did not see any comments specific to UEFI-partition-based Windows machines.
Here is the Double-Take user guide for your reference.
It says it does not support UEFI disks on Linux machines. Nothing about Windows 🙂
For more details on the Azure Site Recovery support matrix, please check this link.
The bottom line is that ASR may be a very good tool, but it has a few limitations. If you are planning a large-scale migration with different workloads, you need to plan your large-workload sprints in advance and decide the strategy on a case-by-case basis.

SCOM 2016 Server integration with Azure OMS (Operations Management Suite)

Dear friends, if you already have an on-premises SCOM infrastructure, it’s a good idea to leverage that infrastructure and connect it to Azure Log Analytics. Azure OMS Log Analytics gives you extended capabilities to manage your on-premises infrastructure, letting you take advantage of OMS while continuing to use Operations Manager.
If you are a long-time SCOM admin, you can still use your existing SCOM server to monitor your workloads, and integration with OMS really helps, thanks to the speed and efficiency of OMS in collecting, storing, and analyzing data from Operations Manager. OMS helps correlate data, identify the root causes of problems, and surface recurrences in support of your existing problem-management process.
OMS has very rich dashboard and reporting capabilities which complement the SCOM Server.
A standard architecture is as follows:
Fig: OMS integration with SCOM
Now, before we plan our deployment, we should note the system requirements:
  • OMS only supports Operations Manager 2016, Operations Manager 2012 SP1 UR6 and greater, and Operations Manager 2012 R2 UR2 and greater. Proxy support was added in Operations Manager 2012 SP1 UR7 and Operations Manager 2012 R2 UR3.
  • All Operations Manager agents must meet minimum support requirements. Ensure that agents are at the minimum update, otherwise Windows agent traffic may fail and many errors might fill the Operations Manager event log.
  • An OMS subscription.
Network requirements
Below are the network requirements for OMS connectivity from the on-premises SCOM server:
Agent

| Resource | Port number | Bypass HTTP inspection |
|---|---|---|
| *.ods.opinsights.azure.com | 443 | Yes |
| *.oms.opinsights.azure.com | 443 | Yes |
| *.blob.core.windows.net | 443 | Yes |
| *.azure-automation.net | 443 | Yes |

Management server

| Resource | Port number | Bypass HTTP inspection |
|---|---|---|
| *.service.opinsights.azure.com | 443 | |
| *.blob.core.windows.net | 443 | Yes |
| *.ods.opinsights.azure.com | 443 | Yes |
| *.azure-automation.net | 443 | Yes |

Operations Manager console to OMS

| Resource | Port number | Bypass HTTP inspection |
|---|---|---|
| service.systemcenteradvisor.com | 443 | |
| *.service.opinsights.azure.com | 443 | |
| *.live.com | 80 and 443 | |
| *.microsoft.com | 80 and 443 | |
| *.microsoftonline.com | 80 and 443 | |
| *.mms.microsoft.com | 80 and 443 | |
| login.windows.net | 80 and 443 | |
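You can sanity-check these prerequisites from a management server by probing the endpoints over TCP 443. Here is a minimal sketch that assumes bash (for `/dev/tcp`) and working outbound DNS; wildcard entries would be tested via a representative host name, and the "Bypass HTTP inspection" requirement still has to be verified on the proxy or firewall itself.

```shell
# Probe a host:port over TCP using bash's /dev/tcp.
# This only confirms a TCP connection can be opened; it is not a full
# proxy-aware or TLS-level check.
probe() {
  host="$1"; port="$2"
  if timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port blocked"
  fi
}

# Typical use against representative endpoints from the table above:
#   for h in oms.opinsights.azure.com ods.opinsights.azure.com; do
#     probe "$h" 443
#   done
```

Running this before the registration wizard saves time chasing connection errors in the Operations Manager event log.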
Today we will see how we can create a Log Analytics Account in Azure and proceed further with this integration.
To start, please go to the Azure Portal and search for the Log Analytics icon as shown below.
In the next step you will find the log analytics dashboard
Once you click the Create Log Analytics button, you will see the following screen.
In the next step, please fill in the required information.
For pricing information related to Azure OMS, please refer to the following article.
Once you click the OK button, you will see the following.
After the successful deployment of the workspace you should be able to see the following screen.
The free tier has the following pricing information.
It has a 500 MB daily limit and a data retention period of 7 days; however, I am not sure how much it charges per node. We need to verify that with the billing team.
You can click the OMS Portal icon to go directly to the OMS portal, as you can see below.
To know more about how to configure alerts in OMS you can read my old post here
Once you click the OMS portal it will show the following screen
Since the OMS workspace is ready, our next step is to connect OMS with the SCOM server. We have a SCOM 2016 server deployed in our environment; we can work with that server and configure connectivity to the whyazure workspace we have just created.
Connecting Operations Manager to OMS
Perform the following series of steps to configure your Operations Manager management group to connect to one of your OMS workspaces.
  1. In the Operations Manager console, select the Administration workspace.
  2. Expand the Operations Management Suite node and click Connection.
  3. Click the Register to Operations Management Suite link as shown below.
  4. In the next step, log in to the OMS portal with your Azure Active Directory credentials to register, as shown below. A wizard will take you through the next steps.
This was a problem for me during installation: since I was using MFA (Multi-Factor Authentication), the authentication process required JavaScript and cookies to be enabled in IE. So I had to change some IE settings so that I would get the phone call on my mobile for the MFA. Once I passed the MFA, I could go to the next step.
In the next step select the OMS Workspace as shown below
Click on Next and you will see the following screen
Now click on Create button and you will see the following screen
Now click Close, and OMS will be connected to SCOM.
So we are done with the connection between the SCOM server and OMS, and everything looks good so far.
Now, to cross-verify that your SCOM data source has been configured with OMS, log in to the OMS portal; it will show the following.
If you go to Settings, it will show the following screen.
The System Center tab shows the SCOM management server name.
You can also cross-verify from the SCOM Monitoring console in the following way:
From the Monitoring view, navigate to the Operations Management Suite\Health State view. Select a Management server under the Management Server State pane, and in the Detail View pane confirm the value for property Authentication service URI matches the OMS Workspace ID.
Now we need to add a few computers; however, there is a strange issue: the Search button is missing. After spending a lot of time debugging this problem, I found an article which mentions it’s a product bug and Microsoft is investigating the issue.
Here is the article which describes the issue.
Some customers have reported that the Search button in the Computer Search dialog box is missing. We are currently investigating this. As a temporary workaround, click in the Filter by (optional) edit box, and then press the Tab key to get to the invisible Search button. Then, you can activate the button by pressing the Spacebar or the Enter key.
Once I followed the above technique, I could see all the computers that I am currently monitoring with SCOM.
I selected a few that I need to monitor and clicked the Add button, and it showed the list.
The Manage Computers page shows all of the computers I selected.
The on-premises computers also show up in the OMS console, as you can see below.
If you click on the 2 ON-PREMISE computers tile, you will see the following screen.
We can also define a period for the log search as shown below
If you export the data to Excel, it will show a table similar to this:
| Field | Record 1 | Record 2 |
|---|---|---|
| SourceSystem | OpsManager | OpsManager |
| TimeGenerated | 2017-09-02T14:39:24.56Z | 2017-09-02T14:39:24.007Z |
| MG | 605a8ae6-c9be-4d5d-b771-af61c95d61b0 | 605a8ae6-c9be-4d5d-b771-af61c95d61b0 |
| ManagementGroupName | SCOM_2016 | SCOM_2016 |
| SourceComputerId | 4d1dd458-4c07-005d-f356-c62283291a8e | f1ceb243-c787-cccc-376b-de24d62b6219 |
| Category | SCOM Agent | SCOM Agent |
| Computer | WAI-SQL01.whyazure.in | WAI-SQL02.whyazure.in |
| OSType | Windows | Windows |
| OSMajorVersion | 10 | 10 |
| OSMinorVersion | 0 | 0 |
| Version | 8.0.10918.0 | 8.0.10918.0 |
| SCAgentChannel | Direct | Direct |
| IsGatewayInstalled | FALSE | FALSE |
| ComputerIP | 106.51.58.228 | 106.51.58.228 |
| RemoteIPLongitude | 77.64 | 77.64 |
| RemoteIPLatitude | 12.91 | 12.91 |
| RemoteIPCountry | India | India |
| ComputerEnvironment | Non-Azure | Non-Azure |
| id | 109e49ce-b85a-e743-0d38-ca0feace2ebc | abb55c62-e2d5-5576-fb3e-cc3faa34969e |
| Type | Heartbeat | Heartbeat |
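Once you have an export like this, even simple command-line tooling can summarize it. The sketch below uses a tiny inline sample (standing in for the real export, which carries the full column set shown above) and counts heartbeat rows per computer; the file name `heartbeats.csv` is just an example.

```shell
# Create a small sample of exported heartbeat data.
cat > heartbeats.csv <<'CSV'
Computer,Category,SourceSystem
WAI-SQL01.whyazure.in,SCOM Agent,OpsManager
WAI-SQL02.whyazure.in,SCOM Agent,OpsManager
WAI-SQL01.whyazure.in,SCOM Agent,OpsManager
CSV

# Locate the "Computer" column from the header row, then tally
# the number of heartbeat records per host.
awk -F',' 'NR==1 { for (i=1; i<=NF; i++) if ($i=="Computer") c=i; next }
           { n[$c]++ } END { for (k in n) print k, n[k] }' heartbeats.csv
```

A per-host heartbeat count like this is a quick way to spot agents that have stopped reporting.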
Now that all the on-premises computers are present in OMS, you can configure alerts for them; refer to my article about alerts by clicking this link.
That’s all for today. I will bring more articles on hybrid infrastructure monitoring with OMS and SCOM. Stay tuned till then.

Dos and don'ts: 9 effective best practices for DevOps in the cloud

DevOps and cloud computing are joined at the hip. Why? DevOps is about streamlining development so user requirements can quickly make it into application production, while the cloud offers automated provisioning and scaling to accommodate application changes.
Unfortunately, IT professionals who practice DevOps in the cloud often make mistakes they easily could have avoided. The problem is that best practices are not yet well understood. Both areas are relatively new, but this issue may have more to do with people than technology—and people problems are often harder to solve.
To help you successfully get off the ground, I've put together a list of dos and don'ts to follow when implementing or operating DevOps in the cloud.

Do splurge on training for both DevOps and cloud computing

Most people who implement DevOps in the cloud are fighting a cultural battle as well as a technological one. Hearts and minds need to change along with the technology.
Training leads to understanding, which leads to acceptance. The key players within the organization need to participate in cloud and DevOps training, and you may need to offer some mentoring as well. You can tell everyone that this is something that they must do, or you can actually show them the way.

Don't forget about security

Security models change in the cloud, where you'll typically employ identity-based security models and technologies. But you need to extend security to the DevOps tools and organization as well.
Security should be part of the automated testing and should be built into continuous integration and continuous deployment processes as those move to a cloud-based platform. If you can afford it, hire or appoint a chief security officer whose sole job is to monitor security within DevOps in the cloud.

Do select DevOps tools that work with more than one cloud

DevOps tools exist on demand, on premise, or as part of a larger public cloud platform. When selecting tools, many people follow the path of least resistance, which involves using a public cloud provider as much as possible to provide the DevOps tools. Typically, those tools are tightly integrated with the application deployment platform.
However, it's not a good idea to lock yourself into a single cloud platform. Applications should be deployable on many different clouds. In this way you can pick and choose the best public or private cloud for the job. You don't want to limit your choices at this point.

Don't forget about service and resource governance

Governance is often overlooked on both the DevOps and cloud sides of the equation—that is, until the number of services and resources reaches a tipping point. This usually happens when the number of services, APIs, and resources such as storage and compute grows to the point when they become way too cumbersome to manage. That number depends on the types of services and resources under management, but you'll likely hit it during your first year of operations with DevOps in the cloud.
In order to prepare for the management of services and resources, you need to build a governance infrastructure well before you need it. These tools vary greatly in features and function but most provide a services and resource directory that helps track, secure, and manage services and resources. These tools typically provide a place to create policies that govern how the services may be leveraged, such as times that they can be accessed, data that can be accessed, and so forth.

Do include automated performance testing

In the cloud, application performance issues are often a function of application design. Many of these performance issues aren't caught before they go into production and users end up finding and reporting them, which isn't good.
Performance testing should be a large part of the automated testing in your DevOps stream. First, you must prevent poor-performing applications from making it into production. Second, public cloud providers may attempt to account for performance issues by automatically adding more resources. If that happens, you could find a large cloud computing bill at the end of the month.
Automated performance testing should enable the application to provide good performance as well as efficient use of resources. These tests should mesh with existing stability and accuracy testing, as well as with existing testing for the APIs and user interfaces.

Don't underfund DevOps in cloud transformation

Something I hear within enterprises is that since DevOps and cloud will save the enterprise money, those savings should fund the transformation. This kind of zero-sum budgeting would seem to make the impact on the yearly IT budget easy to manage. Use this method, however, and you won't have enough money to get your DevOps and cloud projects off the ground—and that means you'll fail.
The reality is that for DevOps in the cloud to provide you with the projected cost savings, you'll have to invest heavily up front for at least the first two years. While your normal operations are ongoing, the DevOps and cloud projects must function independently for a time. This allows for DevOps in the cloud approaches and technologies to prove their worth and for the staff to understand them before you phase them into production.

Do consider containers

Containers provide a way to "componentize" applications so they're portable and easily managed and orchestrated. Integrate containers into your DevOps and cloud strategy.
Spend some time with the technology to evaluate what works and what doesn't as you target your use of those technologies. Also, be sure to consider security, governance, cluster management, and orchestration tools as part of your platform that leverages containers.
That's not to say that containers will always be a fit for the way you build and deploy applications. It does mean that you should consider the value of this application architecture approach, its standards, and enabling technology so you don't miss any value the technology can provide.

Don't force every application to the cloud

When migrating existing applications to the cloud, enterprises typically have hundreds or thousands of applications to consider. Keep these points in mind when making your selections:
  • It's impractical to relocate many applications to the cloud because they're based on traditional technology. A good example would be old COBOL systems.
  • The cost to move many applications can't be cost-justified due to the extensive changes that would be required to host them in the cloud. This is the case with both older and newer applications.
  • Place applications in priority order, starting with those that would provide the most value to the business if migrated.
  • Analyze applications to determine the amount of work needed to meet requirements, from a direct port ("lift-and-shift") to complete refactoring.

Do consider making your applications cloud native

To take complete advantage of a cloud platform, including infrastructure as a service and platform as a service, you must design applications in such a way that they're decoupled from physical resources. Of course, the cloud can provide an abstraction or virtualization layer between the application and the underlying physical or virtual resources, whether they're designed for cloud or not.
But that's not good enough.
When you consider decoupled architecture in the design, understand that the efficiency of the development and deployment stages of an application, as well as the utilization of the underlying cloud resources, can improve by as much as 70 percent. Cloud computing efficiency saves money. You're paying for the resources you use, so applications that work more efficiently with those resources run faster and generate smaller cloud services bills at the end of the month.
So, how much extra work is required to get the benefits of being cloud native? Is it worth it?

Dos and don'ts

As organizations have gotten better at DevOps in cloud computing, best practices have begun to emerge. As with the use of most emerging technologies, you can find guidance, but not hard and fast rules you can use to determine how your organization should use the technology effectively. So plan on learning as you go, and expect to make mistakes.
That said, you can reap huge benefits from leveraging DevOps in conjunction with cloud-based platforms. This potent combination can enhance agility and time to market, as well as greatly reduce operating costs.
The benefits that will accrue from using DevOps in the cloud aren't automatic, and they do require a great deal of brainpower and up-front investment to attain your objectives. But if you understand the level of commitment required and give DevOps in the cloud high priority in your organization, you'll do just fine.

DevOps dictates new approach to cloud development


It's a fact: DevOps and cloud are joined at the hip. The overwhelming majority of cloud development projects employ DevOps, and the list will only get longer. The benefits to using DevOps with cloud projects are also becoming better defined. They include application development speed-to-delivery to meet the needs of the business units faster, user demands that quickly fold back into the software, and lower costs for development, testing, deployment, and operations.
In this article, we define how cloud development is changing, why it's changing, and, most importantly, how you as a software engineer can adapt to the change. We'll focus on how DevOps changes the game for development as a whole and cloud development specifically.

How the game is changing

At its core, DevOps is the automation of agile methodology. The idea is to empower developers to respond to the needs of the business in near real-time. In other words, DevOps should remove much of the latency that has existed for years around software development.
DevOps' links with cloud computing are easy to define:
  • The centralized nature of cloud computing provides DevOps automation with a standard and centralized platform for testing, deployment, and production. In the past, the distributed nature of some enterprise systems didn't fit well with centralized software deployment. Using a cloud platform solves many issues with distributed complexity.
  • DevOps automation is becoming cloud-centric. Most public and private cloud computing providers support DevOps systemically on their platform, including continuous integration and continuous development tools. This tight integration lowers the cost associated with on-premises DevOps automation technology, and provides centralized governance and control for a sound DevOps process. Many developers who enter into the process find that governance keeps them out of trouble, and it's easier to control this centrally via the cloud versus attempting to bring departments under control.
  • Cloud-based DevOps lessens the need to account for resources leveraged. Clouds leverage usage-based accounting, which tracks the use of resources by application, developer, user, data, etc. Traditional systems typically don't provide this service. When leveraging cloud-based resources, it's much easier to track costs of development resources and make adjustments as needed.
What's most interesting is that the cloud isn't really driving DevOps; rather, DevOps is driving the interest and the growth of cloud. In RightScale's 2015 State of the Cloud Report, they found that "Overall DevOps adoption rises to 66 percent, with enterprises reaching 71 percent." Their conclusion is that DevOps is wagging the cloud computing dog, not the other way around.

Why DevOps is leading teams to the cloud

What drives the use of DevOps as a leading enabling technology to get to the cloud? It's the need to simplify and speed up a development process that has stifled growth for many enterprises. Stories abound about titans of industry who are unable to purchase companies or marketing leaders who are unable to launch products, all because IT can't keep up with the application development backlog.
While enterprise leaders look to fix their application development processes by moving from waterfall to DevOps, they also understand that DevOps alone won't save them. The latency in making capital purchases of hardware and software slows the development process, even if it's made agile. Developers end up waiting around for capital resources to be put in place before the applications can be deployed.
Thus, DevOps won't have much value without the cloud, and the cloud won't have much value without DevOps. This fact alone is being understood inside enterprises that once thought they could move to one or the other, and that no dependency existed. We're finding that dependencies between DevOps and cloud do indeed exist.

Approaching cloud app development

When building applications in the cloud, the change needs to start at the software engineering level, not at the C-level. The advantages of building cloud applications using modern DevOps tools should be understood by all who will drive the process. Those who aren't on board will likely get in the way of progress and not respond correctly to the inevitable problems that will arise. (We can call that process "continuous correction.")
While enterprise development shops are quick to pick a cloud platform, often before they establish a DevOps process and DevOps organization, the reality is that DevOps and public and private cloud solutions should evolve at the same time. We must automate our agile processes using cloud and non-cloud DevOps automation tools. At the same time, we must consider how to extend those DevOps processes and automation into public and/or private clouds.
This is easier said than done, considering the newness of DevOps tools and DevOps cloud services. It's not something that you can do in serial order, given the deep dependencies discussed earlier. The process that seems to work best includes the following steps.
1. Define your development requirements. Take a quick look at what you're doing now and what you need to do in the future.
2. Define the business case. You'll have to ask somebody for money, thus the need to define the ROI.
3. Define the initial DevOps processes. Keep in mind, these processes will continually change as we improve them through review, trial, and many errors.
4. Define the initial DevOps solution and links to the cloud platform or platforms. You can't just define DevOps tools without understanding the target platform or platforms. There must be synergy with DevOps processes, automation, culture, and target platform. You need to determine the "whats" and the "hows." This is where most enterprises stumble because of the complexity of all the new moving parts. They miss the mark, in terms of lost opportunities within the new cloud platforms that go unexploited for one reason or another.
5. Consider your people. You need everyone to be on board with DevOps and with having DevOps drive cloud development. This seems to be an issue in many organizations, simply because DevOps and cloud are both new. Adopting both new paths at the same time seems to blow the minds of traditional developers who want to learn but need a great deal of guidance. Training won't save you here, either. It's leadership that needs to come from the developers, and there should be no question about the new processes, tools, platforms, and day-to-day practices.
6. Define CloudOps—how applications will operate in the cloud. Most developers don't want anything to do with operations. Within this new model, that can't be the case. The old model of tossing code over a wall and hoping for the best is over. DevOps and cloud should give developers new, improved visibility into how their applications operate. This feedback can be used to improve the cloud application.

DevOps will lead the way

As DevOps and cloud continue to prove their collective value for enterprises, more CTOs and technology leads will be working to remove the technical and bureaucratic hurdles that stifle growth and opportunities for businesses. However, these same enterprises need to go much further with the larger value of DevOps, which includes continuous and agile deployment. This concept is less understood, and it's even feared by many in enterprise IT, who view it as a path to lower productivity and application quality. But when you add the cloud to the DevOps equation, you see that enterprises no longer have a choice.
If cloud computing is to become effective for enterprises, then it's DevOps that must take us there. The value and the function of DevOps, and the value and the function of cloud computing, are completely synergistic. You won't get the value of one without doing both. On the ops side, many businesses lack the organization and tools to make DevOps work. Various new approaches and their related technologies aren't well understood, and traditional approaches continue to be safe harbors that deliver low value to the business. The changes described here represent an interesting path for these organizations to take.
The strategic nature of DevOps and cloud is unlike other development approaches and platform changes that have come along in recent years. Organizations that cling to the old approaches may find themselves holding stone chisels in a digital world.
The largest hindrance to making the leap is the number of changes that must occur at the same time. DevOps needs to be understood and implemented. The cloud needs to be adopted around DevOps, and thus many of the decisions around DevOps tools and cloud platforms need to occur together.

Get past the changes

At the same time, the culture of the enterprise and developers must change around the notion of DevOps, and the way it's needed to drive cloud development going forward. Finally, enterprise IT must spend the mother of all budgets to get through these changes. And they must spend money without an immediate goal around ROI, which of course drives corporate leaders and shareholders nuts.
However, the alternative—doing nothing—means certain failure. Others are likely to out-maneuver you, in terms of time-to-market with solutions and application services. When you can stand up applications and processes in near real-time and do so within an elastic and efficient environment, the market will reward you for making that effort. If you haven't embarked on the process yet, it's time to get started.

PHP web application on a LAMP Stack

This tutorial walks you through the creation of an Ubuntu Linux virtual server with Apache web server, MySQL database and PHP scripting. This combination of software - more commonly called a LAMP stack - is very popular and often used to deliver websites and web applications. Using IBM® Cloud Virtual Servers you will quickly deploy your LAMP stack with built-in monitoring and vulnerability scanning. To see the LAMP server in action, you will install and configure the free and open source WordPress content management system.

Objectives

  • Provision a LAMP server in minutes
  • Apply the latest Apache, MySQL and PHP versions
  • Host a website or blog by installing and configuring WordPress
  • Utilize monitoring to detect outages and slow performance
  • Assess vulnerabilities and protect from unwanted traffic

Services used

This tutorial uses the following runtimes and services:
This tutorial may incur costs. Use the Pricing Calculator to generate a cost estimate based on your projected usage.

Architecture

Architecture diagram
  1. End user accesses the LAMP server and applications using a web browser

Before you begin

  1. Contact your infrastructure administrator to get the following permissions.
    • Network permission required to complete the Public and Private Network Uplink

Configure the SoftLayer VPN

  1. Ensure your VPN Access is enabled and configured for SSL.
    You must be a Master User to enable VPN access; otherwise, contact your master user for access.
  2. Obtain your VPN Access credentials in your profile page.
  3. Log in to the VPN through the web interface or, preferably, use your local workstation with a VPN client for Linux, macOS, or Windows.
    For the VPN client use the FQDN of a single data center VPN access point from the VPN web access page, of the form vpn.xxxnn.softlayer.com as the Gateway address.

Create services

In this section, you will provision a public virtual server with a fixed configuration. Virtual Servers can be deployed in a matter of minutes from virtual server images in specific geographic locations. Virtual servers often address peaks in demand after which they can be suspended or powered down so that the cloud environment perfectly fits your infrastructure needs.
  1. In your browser, access the Virtual Servers catalog page.
  2. Select Public Virtual Server and click Create.
  3. Under Image, select LAMP latest version under Ubuntu. Even though this comes pre-installed with Apache, MySQL and PHP, you'll re-install PHP and MySQL with the latest version.
  4. Under Network Interface select the Public and Private Network Uplink option.
  5. Review the other configuration options and click Provision to create your virtual server.
    Configure virtual server
After the server is created, you'll see the server login credentials. Although you can connect through SSH using the server public IP address, it is recommended to access the server through the Private Network and to disable SSH access on the public network.
  1. Follow these steps to secure the virtual machine and to disable SSH access on the public network.
  2. Using your username, password and private IP address, connect to the server with SSH.
    sudo ssh root@{YourPrivateIPAddress}
    You can find the server's private IP address and password in the dashboard.
    Virtual server created

Re-install Apache, MySQL, and PHP

It's advisable to update the LAMP stack with the latest security patches and bug fixes periodically. In this section, you'll run commands to update the Ubuntu package sources and re-install Apache, MySQL and PHP at their latest versions. Note the caret (^) at the end of the command.
sudo apt update && sudo apt install lamp-server^
An alternative option is to upgrade all packages with sudo apt-get update && sudo apt-get dist-upgrade.

Verify the installation and configuration

In this section, you'll verify that Apache, MySQL and PHP are up to date and running on the Ubuntu image. You'll also implement the recommended security settings for MySQL.
  1. Verify Ubuntu by opening the public IP address in the browser. You should see the Ubuntu welcome page.
    Verify Ubuntu
  2. Verify port 80 is available for web traffic by running the following command.
    sudo netstat -ntlp | grep LISTEN
    Verify Port
  3. Review the Apache, MySQL and PHP versions installed by using the following commands.
    apache2 -v
    mysql -V
    php -v
  4. Run the following script to secure the MySQL database.
    mysql_secure_installation
  5. Log in to MySQL with the following command, entering the root password when prompted. When you're done, exit the mysql prompt by typing \q.
    mysql -u root -p
    The default MySQL user name and password are both root.
  6. Additionally you can quickly create a PHP info page with the following command.
    sudo sh -c 'echo "<?php phpinfo(); ?>" > /var/www/html/info.php'
  7. View the PHP info page you created: open a browser and go to http://{YourPublicIPAddress}/info.php. Substitute the public IP address of your virtual server. It will look similar to the following image.
PHP info
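The version checks in step 3 can also be scripted in one pass. A minimal sketch, to be run on the server after the install step; it only reports whether each binary is on the PATH:

```shell
# Report whether each LAMP component binary is present on this machine
for cmd in apache2 mysql php; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: installed"
  else
    echo "$cmd: not found"
  fi
done
```

Any component reported as "not found" means the corresponding package did not install correctly.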

Install and configure WordPress

Experience your LAMP stack by installing an application. The following steps install the open source WordPress platform, which is often used to create websites and blogs. For more information and settings for production installation, see the WordPress documentation.
  1. Run the following command to install WordPress.
    sudo apt install wordpress
  2. Configure WordPress to use MySQL and PHP. Run the following command to open a text editor and create the file /etc/wordpress/config-localhost.php.
    sudo sensible-editor /etc/wordpress/config-localhost.php
  3. Copy the following lines into the file, substituting yourPassword with your MySQL database password and leaving the other values unchanged. Save and exit the file using Ctrl+X.
    <?php
    define('DB_NAME', 'wordpress');
    define('DB_USER', 'wordpress');
    define('DB_PASSWORD', 'yourPassword');
    define('DB_HOST', 'localhost');
    define('WP_CONTENT_DIR', '/usr/share/wordpress/wp-content');
    ?>
  4. In a working directory, create a text file wordpress.sql to configure the WordPress database.
    sudo sensible-editor wordpress.sql
  5. Add the following commands substituting your database password for yourPassword and leaving the other values unchanged. Then save the file.
    CREATE DATABASE wordpress;
    GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER
      ON wordpress.* TO wordpress@localhost
      IDENTIFIED BY 'yourPassword';
    FLUSH PRIVILEGES;
  6. Run the following command to create the database.
    cat wordpress.sql | sudo mysql --defaults-extra-file=/etc/mysql/debian.cnf
  7. After the command completes, delete the file wordpress.sql. Move the WordPress installation to the web server document root.
    sudo ln -s /usr/share/wordpress /var/www/html/wordpress
    sudo mv /etc/wordpress/config-localhost.php /etc/wordpress/config-default.php
  8. Complete the WordPress setup and publish on the platform. Open a browser and go to http://{yourVMPublicIPAddress}/wordpress. Substitute the public IP address of your VM. It should look similar to the following image.
    WordPress site running
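Steps 2 through 5 above can also be done non-interactively with here-documents. A sketch under the assumption that you prepare both files in the current working directory first (the password is a placeholder, and you would still move the config file and load the SQL as in steps 6 and 7):

```shell
# Write the WordPress PHP config (step 3) without opening an editor
cat > config-localhost.php <<'EOF'
<?php
define('DB_NAME', 'wordpress');
define('DB_USER', 'wordpress');
define('DB_PASSWORD', 'yourPassword');
define('DB_HOST', 'localhost');
define('WP_CONTENT_DIR', '/usr/share/wordpress/wp-content');
?>
EOF

# Write the database bootstrap script (step 5)
cat > wordpress.sql <<'EOF'
CREATE DATABASE wordpress;
GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER
  ON wordpress.* TO wordpress@localhost IDENTIFIED BY 'yourPassword';
FLUSH PRIVILEGES;
EOF

grep -c '^define' config-localhost.php   # sanity check: expect 5
```

The quoted here-document delimiter ('EOF') keeps the shell from expanding anything inside the files, so the PHP and SQL text is written verbatim.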

Configure domain

To use an existing domain name with your LAMP server, update the A record to point to the virtual server's public IP address. You can view the server's public IP address from the dashboard.
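For illustration, this is what such an A record looks like in standard zone-file syntax; the domain and IP address below are placeholders, and most DNS providers expose the same fields through their management UI:

```
; Example A record: point the domain at the server's public IP
example.com.   3600   IN   A   169.48.0.10
```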

Server monitoring and usage

To ensure server availability and the best user experience, monitoring should be enabled on every production server. In this section, you'll explore the options that are available to monitor your virtual server and understand the usage of the server at any given time.

Server monitoring

Two basic monitoring types are available: SERVICE PING and SLOW PING.
  • SERVICE PING checks that the server response time is 1 second or less
  • SLOW PING checks that the server response time is 5 seconds or less
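The two monitor types amount to a simple threshold comparison on a measured response time. A sketch of that alert logic; the measured value here is hypothetical, not real monitor output:

```shell
# Hypothetical measured response time in seconds
RESPONSE_TIME=2.3
# 1s threshold = SERVICE PING, 5s threshold = SLOW PING
for threshold in 1 5; do
  awk -v t="$RESPONSE_TIME" -v max="$threshold" \
    'BEGIN { if (t <= max) print "ok within " max "s"; else print "alert: over " max "s" }'
done
# prints "alert: over 1s" then "ok within 5s"
```

With a 2.3-second response, SERVICE PING would raise an alert while SLOW PING would not, which is why running both gives a rough severity signal.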
Since SERVICE PING is added by default, add SLOW PING monitoring with the following steps.
  1. From the dashboard, select your server from the list of devices and then click the Monitoring tab.
    Slow Ping Monitoring
  2. Click Manage Monitors.
  3. Add the SLOW PING monitoring option and click Add Monitor. Select your public IP address for the IP address.
    Add Slow Ping Monitoring
    Note: Duplicate monitors with the same configurations are not allowed. Only one monitor per configuration can be created.
If a response is not received in the allotted time frame, an alert is sent to the email address on the IBM Cloud account.
Two Monitoring

Server usage

Select the Usage tab to understand the current server's memory and CPU usage.
Server Usage

Server security

IBM® Cloud Virtual Servers provide several security options such as vulnerability scanning and add-on firewalls.

Vulnerability scanner

The vulnerability scanner checks the server for known vulnerabilities. To run a scan on the server, follow the steps below.
  1. From the dashboard, select your server and then click the Security tab.
  2. Click Scan to start the scan.
  3. After the scan completes, click Scan Complete to view the scan report.
  4. Review any reported vulnerabilities.

Firewalls

Another way to secure the server is by adding a firewall. Firewalls provide an essential security layer: they prevent unwanted traffic from reaching your servers, reduce the likelihood of an attack, and let your server resources stay dedicated to their intended use. Firewall options are provisioned on demand without service interruptions.
Firewalls are available as an add-on feature for all servers on the Infrastructure public network. As part of the ordering process, you can select device-specific hardware or a software firewall to provide protection. Alternatively, you can deploy dedicated firewall appliances to the environment and deploy the virtual server to a protected VLAN. For more information, see Firewalls.