/ AWS, EC2, LINUX, DEBUGGING

How I lost, and regained, ssh access to my AWS EC2

This is a long story so I will leave out extra details to keep it (somewhat) brief.

I was working with an AWS EC2 instance with WordPress installed. When you host your own WordPress sites, you need to keep up with server maintenance. I needed to update the PHP version for security reasons, and I found a tutorial that told me to upgrade Ubuntu first. Once the upgrade was complete, I no longer had ssh access to my EC2.
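
In case it helps anyone retrace the same path, the Ubuntu part was (as far as I can reconstruct it) the standard release upgrade - roughly the commands below, though the tutorial you follow may differ:

  # bring the current release fully up to date first
  sudo apt update && sudo apt upgrade -y

  # then move to the next LTS release (16.04 -> 18.04 in my case)
  sudo do-release-upgrade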

After doing some reading, I believe this happened because there is some kind of breaking change in the ssh server config between Ubuntu versions 16.04 and 18.04.

For most users who have the physical device in front of them, it is not difficult to run a few commands on the machine and repair the config issue. In my case, however, my only access was through ssh. I could not fix ssh or get to the files on my machine until I had some kind of command line or control panel to work with.

After a lot of trial and error I eventually regained access. I will share the process here.

The things that didn’t work

First I’ll list what I tried that didn’t work. To be fair, though, it’s possible that some of these could work for someone else, as there is plenty of documentation for these methods.

  • checked security groups to ensure ssh access was still allowed on port 22 (it was)
  • added a new security group with a new port (1022) and tried to ssh using it
  • created a new AMI based on the current EC2 instance, and launched an entirely new instance from it - still no ssh access on the new one
  • launched a new instance from an old AMI that was created before I upgraded Ubuntu
  • stopped the current instance, detached the EBS volume, reattached it, and restarted the instance
  • added a user data script to restart ssh (method 4 in this article https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-resolve-ssh-connection-errors/) - a rough sketch of that kind of script follows this list
  • ran the troubleshoot ssh automation document (method 3 in the same article https://aws.amazon.com/premiumsupport/knowledge-center/ec2-linux-resolve-ssh-connection-errors/), hoping to find some clues as to the root of the problem
  • tried to set up EC2 Connect - it only works on Nitro-based instances, and I wasn’t able to move my instance to a Nitro-based type
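
For reference, the user data method works by having cloud-init re-run a small script on every boot that restarts the ssh daemon. Below is a rough sketch of that kind of user data - my reconstruction of the general shape, not the exact text from the AWS article, and the service name (ssh vs sshd) depends on the distro:

  Content-Type: multipart/mixed; boundary="//"
  MIME-Version: 1.0

  --//
  Content-Type: text/cloud-config; charset="us-ascii"

  #cloud-config
  cloud_final_modules:
  - [scripts-user, always]

  --//
  Content-Type: text/x-shellscript; charset="us-ascii"

  #!/bin/bash
  # restart the ssh daemon on every boot ("ssh" on Ubuntu, "sshd" on many other distros)
  systemctl restart ssh || systemctl restart sshd
  --//--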

I was about ready to give up, but there was one more method I had not tried - setting up session manager.

The session manager way

While attempting all of the above processes I came across this option. There were a lot of steps in the documentation I read at that time and it seemed a bit intimidating to set up.

I revisited it while on the “connect” page in EC2. On this page there are tabs with several options for connecting to your instance. Connecting via session manager was one of these options.

I dismissed it earlier because it said I couldn’t connect. This time I paid attention to the possible reasons why the connection wasn’t being made. One possibility was that the proper IAM policy wasn’t attached, which could be added using “quick set-up” in systems manager.

So I gave it a try, followed along in the quick set-up console, and chose an EC2-related configuration. When I returned to the session manager tab on the EC2 connect page, the warnings were gone.
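
For anyone who would rather do that piece from the CLI than the quick set-up console, the permission part boils down to attaching the AmazonSSMManagedInstanceCore policy to the role in the instance’s IAM instance profile (assuming the SSM agent is already running on the instance, which recent Ubuntu AMIs ship with). A rough sketch - the role, profile, and instance IDs below are placeholders:

  # attach the policy session manager needs to the instance's role
  aws iam attach-role-policy \
    --role-name my-ec2-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

  # if the instance has no instance profile at all, associate one
  aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=my-ec2-profile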

This time when I clicked “connect”, I got command line access to the EC2!

Even after running the “pwd” Linux command I was a bit lost in the file system. Signing in via session manager didn’t drop me directly into the WordPress folders like an ssh login did. But now that I could run commands, I decided to troubleshoot ssh access once again.
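
If you end up similarly disoriented after connecting, a couple of generic Linux commands will re-orient you - nothing here is specific to session manager, and the paths are just the usual suspects:

  # where am I, and who am I logged in as?
  pwd
  whoami

  # locate the WordPress install (commonly under /var/www, or /opt/bitnami on Bitnami images)
  sudo find / -name wp-config.php -not -path "/proc/*" 2>/dev/null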

I tried a command to test the ssh config, sudo sshd -t, which unfortunately didn’t provide anything useful for me (it prints nothing when the config is syntactically valid, so no output at least meant the syntax was fine).
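
For anyone debugging the same thing, a few other generic checks can help narrow it down (again, the service may be named ssh or sshd depending on the system):

  # is the ssh daemon actually running?
  sudo systemctl status ssh

  # any recent errors from the daemon, or failed connection attempts?
  sudo journalctl -u ssh --since "1 hour ago"
  sudo tail -n 50 /var/log/auth.log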

The information that I believe helped me the most was from this article on the Bitnami community page, which addressed essentially the same problem I was having, just on AWS Lightsail. Since Lightsail is built on EC2, I thought it might apply.

It suggested, as I suspected from my earlier research, that the ssh config was the issue, and said to run these commands:

  sudo sed -i 's/^Ciphers .*/Ciphers +aes256-cbc,aes192-cbc,aes128-cbc/' /etc/ssh/sshd_config
  sudo service sshd stop
  sudo service sshd start

I can’t say for sure these were the exact steps that solved my problem, since I ran some other commands around this time in session manager. But after running the above commands I tried to ssh into my instance again, and it worked. Most likely it was this change that fixed it, since it’s the only one that addresses the config problem.
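
If you want to verify a fix like this rather than guess, it helps to re-check the config on the instance and watch the negotiation from the client side. Roughly (the key file, user, and host below are placeholders):

  # on the instance: confirm the config still parses, then restart the daemon
  sudo sshd -t
  sudo service sshd restart   # or "ssh", depending on the service name

  # from your local machine: verbose output shows the key exchange and cipher negotiation
  ssh -v -i my-key.pem ubuntu@<instance-public-ip>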

Conclusion

This may be the most frustrating problem I’ve had to deal with yet in my dev journey. I’ve been stuck many times before, but this was the closest I’ve been to running out of ideas and abandoning the project. Thankfully I was able to push through and that didn’t have to happen.

The biggest benefit of working through this problem was that it forced me to dive deep into EC2. It helped solidify my knowledge of EC2 and exposed me to other AWS services. In addition, I learned more about Linux systems, which was quite interesting for me since I don’t work with them often.

If nothing else, this was a great learning opportunity. And I’m now very happy to move on.