Tag Archives: Linux

A long time ago I became frustrated with having to update my WordPress plugins manually, so I created a Perl script and a blog post (https://blog.vpetkov.net/2011/08/03/script-to-upgrade-plugins-on-wordpress-to-the-latest-version-fully-automatically/) that explained how to automate this. The idea was quite simple: feed the script a plugin name, have it check the WordPress plugins page for the latest versioned download, grab it, and extract it over the specified blog plugins directory, thus updating the plugin.

The script was simple and it worked very well. It made dealing with plugins many times easier. However, there was one big downside, as some users pointed out: it did not actually check whether a plugin needed to be updated. It blindly replaced the current plugin with the latest version. This meant that there was no way to “efficiently” automate it. If you cron-ed it directly, it would simply pull and update all your plugins at whatever period you specified. For the longest time this really irritated me, but I didn’t have time to dig through WordPress to understand how the engine checked for and signaled local plugin updates. One particular user (Joel) forked a copy and made many improvements to deal with this specific issue.

As time went on, I decided to look at this problem again. A couple of years ago I solved it in a really elegant way, but I didn’t have time to update the blog post. A few days ago, after looking at the blog statistics, I realized that the WordPress article was one of the top 10 most popular ones. So, with that said, here is:

A new simple and elegant solution

The idea is to use the WordPress CLI to “query” the local plugins database for plugin names, version numbers, and “activated” status, and then compare the “local” plugin version with the “remote” plugin version. If a plugin is active and in need of an update, fall back to my original Perl script to update it. Aha! And now we have something that can be cron-ed 🙂
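As a minimal sketch of that logic in shell (assuming WP-CLI is installed as “wp”, the blog lives in /var/www/blog, and the original Perl script is saved as wp-plugin-update.pl; all three names are placeholders):

# list active plugins with an update available, then hand each one to the old updater
for plugin in $(wp plugin list --path=/var/www/blog --status=active --update=available --field=name); do
    ./wp-plugin-update.pl "$plugin"    # placeholder name for the original Perl script
done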

To get started, first grab the WP CLI utility. We are going to rename it, move it to an accessible place, and take care of permissions so that we can use it:
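For reference, the standard WP-CLI install goes roughly like this:

curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp
wp --info    # sanity check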

Continue Reading → Easy fully automated WordPress plugin update system

If you have not used Swarm, skim the non-service-discovery tutorial first:
https://blog.vpetkov.net/2015/12/07/docker-swarm-tutorial-and-examples. It’s very easy, and it should give you a feel for how Swarm works within a couple of minutes.

Using Swarm with pre-generated static tokens is useful, but there are many benefits to using a service discovery backend. For example, you can utilize network overlays and have common “bridges” that span multiple hosts (https://docs.docker.com/engine/userguide/networking/get-started-overlay/). It also provides service registration and discovery for the Docker containers launched into the Swarm. Now let’s get into how to use it with service discovery, which is what you would use in a scaled-out/production environment.

Again, assuming you have a bunch of servers running docker:
vm01 (10.0.0.101), vm02 (10.0.0.102), vm03 (10.0.0.103), vm04 (10.0.0.104)

Normally, you would run “docker ps” on each host, for example:
ssh vm01 'docker ps'
ssh vm04 'docker ps'

If you enable the API for remote bind on each host you can manage them from a central place:
docker -H tcp://vm01:2375 ps
docker -H tcp://vm04:2375 ps
(note: the port can be omitted when using the default of 2375)
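Enabling the remote API on Ubuntu/Debian-era installs was typically done via DOCKER_OPTS, along these lines (a sketch, not a hardened setup; binding to 0.0.0.0 without TLS is insecure):

# /etc/default/docker
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"

# then restart the engine
service docker restart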

But if you want to use all of these docker engines as a cluster, you need Swarm.
Here we will go one step further and use a common service discovery backend (Consul).
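As a preview, the classic (pre swarm-mode) flow looks roughly like this, assuming Consul runs on vm01 (images and flags here are illustrative; the full post covers the details):

# on vm01: a single-node Consul server (progrium/consul was a common image at the time)
docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap

# on every node: join the swarm, registering through Consul (substitute each node's IP)
docker run -d swarm join --advertise=10.0.0.102:2375 consul://10.0.0.101:8500

# on the manager: run the swarm manager against the same Consul backend
docker run -d -p 3375:2375 swarm manage consul://10.0.0.101:8500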

Docker Swarm Tutorial with Consul and How-To/Examples

Continue Reading → Docker Swarm Tutorial with Consul (Service Discovery) and Examples

[ updated 10-30-2016 | Upgraded Plex to plexmediaserver-1.1.4.2757-24ffd60.x86_64.rpm and CentOS ]

Recently I tried setting up a Plex server in a Docker container. The first problem was the 127.0.0.1:32400 bind, which required logging in locally or port forwarding. After doing this once, I realized that you could use the Preferences.xml file, but that meant you couldn’t truly automate this or deploy it elegantly in a Docker container. And what if you wanted to run other servers, say, for friends? I finally figured out how to do this in the most elegant way possible.

First – Grab your Unique Plex Access Token

Log in at https://app.plex.tv/web/app with your username and password.
Open your JavaScript console (in Chrome: View -> Developer -> JavaScript Console) and type:
console.log(window.PLEXWEB.myPlexAccessToken);

Note the token, which will look like this: “PZwoXix8vxhQJyrdqAbY”

At this stage, DO NOT log out of your account until you register the new server; otherwise your token will regenerate.
Once you register the server, it won’t matter if the token changes.

Grab my Docker Image

Check out: https://hub.docker.com/r/ventz/plex/
You can pull it down by doing:
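Given the Hub page above, that is simply:

docker pull ventz/plex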

Continue Reading → Plex server on a VPS Docker setup without port forwarding

A bit of background and the “old/normal way”

If you use Docker, you very quickly run into a common question: how do you make Docker work across multiple hosts, datacenters, and different clouds? One of the simplest solutions is Docker Swarm. Docker summarizes it best as “native clustering for Docker…[which] allows you [to] create and access a pool of Docker hosts using the full suite of Docker tools.”

One of the biggest benefits to using Docker Swarm is that it provides the standard Docker API, which means that all of the existing Docker management tools (and 3rd party products) just work out of the box as they do with a single host. The only difference is that they now scale transparently over multiple hosts.

After reading up on it, it was evident that this is a pretty simple service, but it wasn’t 100% clear what went where. After searching around the web, I realized that almost all of the tutorials and examples on Docker Swarm involved either docker-machine or very convoluted examples which did not explain what was happening on which component. With that said, here is a very simple Docker Swarm Tutorial with some practical examples.

Assuming you have a bunch of servers running docker:
vm01 (10.0.0.101), vm02 (10.0.0.102), vm03 (10.0.0.103), vm04 (10.0.0.104)
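As a preview of what follows, the classic token-based flow is roughly this (the standard pre-swarm-mode commands; substitute your own IPs):

# on any host: generate a cluster token
docker run --rm swarm create

# on every node: join the cluster, advertising that node's own IP
docker run -d swarm join --advertise=10.0.0.101:2375 token://<cluster-token>

# on the manager host: start the manager on port 3375
docker run -d -p 3375:2375 swarm manage token://<cluster-token>

# then talk to the whole cluster as if it were a single engine
docker -H tcp://10.0.0.101:3375 ps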

Continue Reading → Docker Swarm Tutorial and Examples

Lately, we have seen some really bad vulnerabilities involving SSL (Heartbleed) and Bash (later dubbed “Shellshock”), along with some slightly “lighter” Linux/open source ones.

In September of this year, Google first discovered a fallback attack against SSL 3.0 and published a paper on it: https://www.openssl.org/~bodo/ssl-poodle.pdf.
Today, it was officially confirmed that SSL version 3.0 is no longer secure, and thus it is no longer recommended in client software (ex: web browsers, mail clients, etc.) or server software (ex: Apache, Postfix, etc.).
This was dubbed the “POODLE” vulnerability and given CVE-2014-3566.

A “POODLE attack” can be used against any website or browser that still supports SSLv3.
Browsers and websites need to turn off SSLv3 as soon as possible to avoid compromising sensitive/private information. Even though only a really small percentage of servers/browsers still support SSLv3 (Mozilla estimates 0.3% of the internet), that still amounts to a very large number of users.
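For example, on the server side, disabling SSLv3 in Apache (mod_ssl) is a one-line change, and you can verify with openssl (the hostname below is a placeholder; -ssl3 requires an openssl build that still includes SSLv3 support):

# in your Apache SSL configuration
SSLProtocol all -SSLv2 -SSLv3

# verify: this handshake should now fail
openssl s_client -connect www.example.com:443 -ssl3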

How can I check if my browser is vulnerable?
The folks at DShield set up this nice browser check: https://sslv3.dshield.org:444/index.html
You can also use: https://www.poodletest.com

Poodletest was first mentioned to me by Curtis Wilcox.
Continue Reading → OpenSSL – SSL 3.0 Poodle Vulnerability

As some of you may have heard, a very serious remote vulnerability within bash was disclosed today.

A quick summary of the problem is that bash does not properly process function definitions, which can be exported like shell variables. This is a bit like a SQL/XSS injection problem — you provide an “end” to your input, and continue writing other functions/calls after it, which then get executed.

A quick example:
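The canonical test that circulated at the time (run in a shell on the machine you want to check):

env x='() { :;}; echo vulnerable!' bash -c "echo this is a test"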

A vulnerable system looks like this:
vulnerable!

A patched system looks like this:
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'

Continue Reading → Bash remote exploit vulnerability

UPDATE: Insecure has released v6.46, which contains all of these patches. Just grab the latest and follow the usage info here.

If you don’t know what Heartbleed is, you can find out here: http://heartbleed.com/. If you don’t want to read the details, XKCD put together a great short comic about it: http://xkcd.com/1354/

NOTE: I first put this together 3 days ago, but I am just now releasing it after being asked by many people for the package and directions.

The problem: how do you scan a bit more than 5 class B’s (~328,000 IP addresses) before any of the vendors (Tenable, Qualys, Rapid7) have released signatures? Easy: you build your own!
The goal was to scan as many IPs as possible at work, as quickly as possible.

After using the Heartbleed github project (https://github.com/FiloSottile/Heartbleed) and creating a Dancer web service around it, I realized that there still needed to be a faster way to scan for this. How much faster?

How about a /24 (254 IP addresses) in less than 10 seconds.

I already have a patched version of Nmap (6.40) that includes the Heartbleed checks.
Again, Insecure has released v6.46, which has these patches. Grab that and follow these directions.

Then, you can scan like this:
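With a patched (or 6.46+) Nmap, the scan is along these lines (the subnet is a placeholder):

nmap -p 443 --script ssl-heartbleed --open -T4 10.0.0.0/24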

 

If you want cleaner results, for a script, a good way to filter the output will be with something like this:
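Something like this works (the grep patterns match Nmap's standard output):

nmap -p 443 --script ssl-heartbleed --open -T4 10.0.0.0/24 | grep -E 'scan report|ssl-heartbleed'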

This produces a clean two-line result per host: if a host is vulnerable, it will have “ssl-heartbleed” under its host/IP address entry.

 

How to build your own patched Nmap binary?

But what if you don’t trust my binary? Good: let me show you how to build one yourself:
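In outline, the build goes roughly like this (the version matches the post; treat the exact URL and script locations as assumptions):

# grab and unpack the nmap source
wget https://nmap.org/dist/nmap-6.40.tar.bz2
tar xjf nmap-6.40.tar.bz2 && cd nmap-6.40

# drop the ssl-heartbleed.nse detection script into scripts/ (and its tls.lua
# dependency into nselib/), then build and install
./configure && make && sudo make install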

Continue Reading → Ridiculously fast Heartbleed Subnet Scanner – nmap heartbleed howto and tutorial

There are many ways to exploit a web server and gain access to the file system, whether read or write (sometimes both). This becomes even easier when one hosts CGIs or other dynamic code, especially when that code includes user-based input. Recently, I found one of the most elegant exploits that I have seen for this kind of attack vector, so I wanted to go over it and share some information about how it works and what exactly it exploits.

To set up the background for this scenario, imagine a web server (ex: ‘www.example.com’) configured with userdirs, which allows CGI execution, not an uncommon situation at all. This means that ‘user1’ will have a directory like ‘public_html’, which becomes directly accessible at ‘http://www.example.com/~user1/’. For example, creating a ‘blah’ folder in ‘/home/user1/public_html’ will create ‘http://www.example.com/~user1/blah’ on the web.

At some point, ‘user1’ creates a file called ‘x.cgi’ which takes a GET parameter called ‘file’; if that parameter names a file that exists, it loads it via an include, and otherwise it loads a default.html file. Let’s assume that ‘x.cgi’ is a PHP file which looks like this:
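The listing would have been something along these lines (a reconstruction based on the description, not the original code):

<?php
if (isset($_GET['file']) && file_exists($_GET['file'])) {
    include($_GET['file']);
} else {
    include('default.html');
}
?>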

Continue Reading → Web Exploit – user modifiable Read and Execute can give you Write access

Setting up the network interfaces is something that seems to give people a hard time (clearly visible here: http://docs.openstack.org/grizzly/basic-install/apt/content/basic-install_network.html). If you follow that guide, one of the most confusing points is how Open vSwitch fits into the existing architecture.

Assuming you are following the guide, you have 2 networks:
10.10.10.0/24 -> private
10.0.0.0/24 -> public

Your Network Controller, again per the guide, will have an internal-network interface of “10.10.10.9” and an external-network interface of “10.0.0.9”.

Your starting network config (/etc/network/interfaces) file will look like this:
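A representative starting point, assuming eth0 is the internal interface and eth1 the external one (interface names are assumptions):

auto eth0
iface eth0 inet static
    address 10.10.10.9
    netmask 255.255.255.0

auto eth1
iface eth1 inet static
    address 10.0.0.9
    netmask 255.255.255.0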

Now, you will first install the packages needed:
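On Ubuntu, per the Grizzly guide, the Open vSwitch pieces look something like:

apt-get install -y openvswitch-switch openvswitch-datapath-dkms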

Then you will start the Open vSwitch:
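Something like:

service openvswitch-switch start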

Continue Reading → OpenStack – Network Controller – Open vSwitch – Network Interfaces Config

Recently, while setting up the network controller for OpenStack, I saw this message:

# tail -f /var/log/quantum/openvswitch-agent.log

ERROR [quantum.plugins.openvswitch.agent.ovs_quantum_agent] Failed to create OVS patch port. Cannot have tunneling enabled on this agent, since this version of OVS does not support tunnels or patch ports. Agent terminated!

What this means is that the version of the datapath (shipped by Ubuntu) does not have the support needed to create tunnels or patch ports. This happened on Ubuntu 13.04.

Fortunately, it is VERY easy to solve this. You simply need to build your own datapath for your kernel. For this, you need Open vSwitch’s datapath source and module-assistant:
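On Ubuntu, that is something like:

apt-get install openvswitch-datapath-source module-assistant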

You can then grab your kernel headers and any other dependencies:
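For example:

apt-get install linux-headers-$(uname -r)
m-a prepare    # module-assistant pulls in the common build dependencies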

I noticed that either the kernel headers do not have version.h in the right place, or module-assistant looks in the wrong place. You can solve this by doing:
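The usual workaround was a symlink, along these lines (paths assume the stock Ubuntu headers layout):

ln -s /usr/src/linux-headers-$(uname -r)/include/generated/uapi/linux/version.h \
      /usr/src/linux-headers-$(uname -r)/include/linux/version.h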

And finally, to download, build, and install the module:
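With module-assistant, that is a single command:

m-a auto-install openvswitch-datapath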

Now, reboot your system so that the new module is loaded, and you are ready to go. You will notice that “/var/log/quantum/openvswitch-agent.log” no longer has this issue.