Monday, December 29, 2014

Solving 'Plugin org.apache.maven.plugin could not be found', or 'Could not find artifact in artifactory-release'

We recently purchased an Artifactory 3.4 pro license where I work, and I've been setting it up to work with Jenkins.

Unfortunately, Jenkins was failing when it was trying to download an Apache Maven dependency with a virtual repository I'd created.

Either I'd forgotten a step, or it had already been set up when I was playing with the evaluation version; in any case, I'd failed to link a large online Maven 2 repository to my virtual repo.

[NOTE: I was also trying to update the .m2/settings.xml file, and that wasn't working in Jenkins either.]

If you're getting messages in your Jenkins console output like this :

[ERROR] Plugin [...] or one of its dependencies could not be resolved: Failed to read artifact descriptor for [...]: Could not find [...] in artifactory-release

and have set up your Jenkins job to work with Artifactory, this might be the issue.

Once I had my virtual repository serving artifacts from my local repository, I had neglected to add a remote repository - and one of the most crucial ones at that: maven2.

In my case, org.apache.maven.plugins:maven-clean-plugin:2.5 wasn't found.

I added a new remote repository pointing to :

Then I went back to my virtual repo and added the Maven repo under Edit Virtual Repository > Basic Settings > Repositories.

I launched the build again and the error went away.

Friday, December 19, 2014

Running nmap scans to verify services aren't disrupted (e.g. elasticsearch)

I've worked in the software security space for about 5 years now, both in identity management and now SIEM/log analytics.

One useful UNIX command I never had experience with - until now - was nmap.

A big problem we've encountered is that some services that work with elasticsearch were easily disrupted by external nmap scans.

To remedy this, we had to reduce the number of externally accessible ports and also use nginx as a reverse proxy, requiring people to log into our web interfaces with a username and password.


There are non-intrusive and more intrusive ways to run nmap: a basic scan simply tests which ports are open on a remote server, while more aggressive options add service/OS detection and faster execution.

Some sample commands:

nmap -p 1-65535 [IP of server]
nmap -p [port range],[another individual port if needed] -T4 -A -v [IP of server]

These commands were definitely helpful when verifying the lockdown of our ports, especially with services like elasticsearch and cassandra. Additionally, putting nginx in front of browser-facing services (e.g. elasticsearch HQ) helped out even more.

nmap is certainly a nice tool for testing port lockdown.
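When nmap isn't available, the basic "which ports answer" check can be approximated with a plain TCP connect test. Here's a minimal sketch in Python; the function name and the one-second timeout are my own choices, not anything nmap-specific:

```python
import socket

def check_ports(host, ports, timeout=1.0):
    """Try a plain TCP connect to each port; True means something is listening."""
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            results[port] = s.connect_ex((host, port)) == 0
    return results
```

This only does a connect scan, so it won't tell you anything about service versions the way nmap -A does, but it's enough to confirm that a port you intended to close really stopped answering.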

Thursday, November 20, 2014


I posted earlier in the year about SSH reverse tunneling.

At my new job, I haven't had as many issues in that department, but I do re-image servers quite a bit and then need to SSH back into the machine, only to get those pesky errors informing me there might be a man-in-the-middle attack. Well, this is an internal firewalled server, so that's extremely unlikely.

When this happened, I started off by running ssh-keygen -R [hostname|IP address]

and that worked well, but if you happen to have more than one known_hosts file under your .ssh directory, it might not work.

When that became a little more tedious I tried just deleting the known_hosts* file(s) under .ssh. That worked too, but it's a little too much for the task at hand. Kinda like taking a sledgehammer to a small problem.

I ultimately decided that I wanted something a little less severe that would tackle the short-term problem. The best solution is to pass command line arguments when you SSH.

ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@[hostname|IP address]

I found a few sites that mentioned this, but it was Linux Commando who provided good detail. Thanks.

Since I was also doing SSH scripting with Perl, I thought I would add a step to my script to wait for the SSH daemon to start after a re-image/reboot.

The code was a little more difficult to get working since I don't work with Perl as regularly (although more recently), and the Net::SSH::Perl documentation could offer a few more examples of how to set up the options.

Here's the code:

use Net::SSH::Perl;

# Pass the same ssh config directives as the command-line flags above
my %params = ( options => [ "UserKnownHostsFile /dev/null",
                            "StrictHostKeyChecking no" ] );
my $ssh;
until ($ssh) {
    eval {
        $ssh = Net::SSH::Perl->new($passedHost, %params);
        $ssh->login($user, $pass);
    };
    if ($@) {
        warn "Cannot SSH yet. Here's the error message below:\n$@Waiting 30 seconds for SSH daemon to come up.\n";
        undef $ssh;
        sleep 30;
    }
}
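The same wait-until-sshd-is-up idea, stripped down to a plain TCP port poll, can be sketched in Python as well; the retry delay and try count here are just illustrative defaults, not values from my actual script:

```python
import socket
import time

def wait_for_ssh(host, port=22, retry_delay=30, max_tries=20):
    """Poll until a TCP connect to the SSH port succeeds; False if it never does."""
    for _ in range(max_tries):
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(retry_delay)
    return False
```

A successful TCP connect only proves the daemon is accepting connections, so in practice you'd still follow it with the real SSH login attempt.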

Wednesday, June 18, 2014

Setting up Windows 2012 Jenkins slave : Part 2

This is the second update on my experiences setting up a Windows 2012 Jenkins slave, where other Jenkins instances have been Unix based.

Let me just say that Jenkins isn't as easy to setup on Windows. You probably already knew that.

As an overview, this is the high-level way we write and build code and what is needed for continuous integration:

1) Java 7 and 8
2) Eclipse or IntelliJ for writing code and running tests outside the command line or Jenkins
3) Maven 3.x
4) Git 1.8.4 on Mac, Git (msysgit 1.9.2) on Windows
5) Jenkins master running on Linux
6) Jenkins slave agent running as a slave app on Windows
7) Cygwin for Windows

Here are further problems I encountered setting up Jenkins on Windows 2012, besides what I detailed in my prior post:

1) SH scripts for running tests need to run on Windows
2) Access denied issues when running certain files (e.g. BAT)
3) CreateProcess from Jenkins cannot find files to run

The first issue above is that we use SH scripts (in Bash) to setup testing and run our tests after all other packages are built. Windows won't recognize SH scripts as runnable. I decided that since I had Cygwin installed with Bash I could launch these SH scripts from within a BAT file.

The BAT file is basically one line:

C:\cygwin64\bin\bash.exe --login "C:\jenkins\workspace\...\"

I originally had the line start with just bash, but the process couldn't be found when run through Jenkins, since it was launched via CreateProcess. I solved my 3rd issue above by fully qualifying the process I wanted to kick off, using the full path to where bash lived under Cygwin, and then I was done with my BAT script.

The 2nd issue above showed up as build failures in Jenkins runs where I would see "access denied". I thought I could change permissions in the POM files where these scripts were launched, but that didn't work.

Ultimately, I found that the access-denied errors with my scripts were caused by Eclipse, which wouldn't mark some of the new files I created as executable. I opened the file properties, updated the scripts to make them fully executable, and had no further issues.

Thursday, June 12, 2014

Reverse SSH tunneling to get around corporate intranet port blocking

Have a port that is blocked between two servers you use? Let's say you use ports above 1024, for example, in the 8000-9000 range for web applications or some other proprietary application you write.

Let's also say you have a Linux instance you SSH to that needs to connect back to your own laptop or another Linux machine, but the port on your local machine cannot be accessed from the remote Linux box unless you have IT open that port, which means bugging them with a ticket.

Here's something I learned the other day.

In my scenario, I needed to connect from a Linux Openstack instance to a web application running on port 9999 on my local MacBook Pro while I was connected via VPN.

Unfortunately, I couldn't connect to the port with a simple program like curl since the port was blocked.

Here are the steps I used to have SSH serve up that port from the remote Linux instance back to my Mac.

Detailed Steps:

You need two terminal windows on the machine you need to connect back to. I had two tabs open in the Terminal window on my Mac.

1st terminal window:

     Change this string to what port you need to open >

ssh -R 9999:localhost:9999 [root or whoever you connect as]@[the virtual server you connect to]

     If you don't want to use localhost, replace it with the server name you want to connect back to.

     You will now be on the remote linux box

2nd terminal window:

     ssh [root or whoever you connect as]@[the virtual server you connect to]

     You will now be on the remote linux box in the 2nd terminal window.

     Try using curl to test the port that you want to connect back to >

     curl https://localhost:9999//index.html --insecure

          Again, replace localhost with whatever server name you'd like to use to connect back to.
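The -R forwarding spec in the 1st terminal window is easy to get backwards, so here's a small sketch that assembles the same command as an argument list (e.g. for use with subprocess); the function and parameter names are my own, purely illustrative:

```python
def reverse_tunnel_cmd(user, gateway, port, local_host="localhost"):
    """Assemble the ssh argv that forwards remote `port` back to `local_host`.

    `user` and `gateway` stand in for whatever account and virtual server
    you actually connect to.
    """
    return ["ssh", "-R", f"{port}:{local_host}:{port}", f"{user}@{gateway}"]
```

For example, reverse_tunnel_cmd("root", "my-openstack-vm", 9999) produces the same invocation as the 1st terminal window above.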

Wednesday, April 09, 2014

Show In Tree: Using Artifactory to update Maven pom.xml file

I use Artifactory on a monthly basis, not as often as my daily use of Maven. I did learn something today about using Artifactory that will make my life easier when I need to add further dependencies to a POM file.

As most Maven users know, your local repository lives under the .m2 directory. Additionally, you may have libraries in your local repository that other users don't currently have.

I made the mistake of presuming that a library was in everyone else's .m2 repository when I added some new code that relied on some less common libraries.

The build failed.

Luckily, the library was in Artifactory.

A very convenient shortcut to update my POM was to find the library in Artifactory, hover over it, select "Show In Tree" from the pop-up, copy the dependency string, and paste it into the POM file.

Here's a screenshot of what I'm talking about.

Tuesday, February 25, 2014

Setting up a Windows 2012 Jenkins slave from a Linux Jenkins master

There are some caveats to setting up a Jenkins slave on Windows (e.g. Windows 2012) from a Linux master box.

To accommodate building on Windows 2012 from a master Jenkins server (where you may already have linux master to linux slave) I'd advise this high-level setup within the Jenkins master.

     1) Two JDKs defined in the Jenkins config: one for linux, the other for windows 2012.
     2) Two Perforces (or whatever your SCM solution is) defined under Jenkins config too: one for linux, and one for windows 2012.

Also, when you set up a JDK and Perforce in Jenkins for Windows you'll get Jenkins errors in a couple of places in the Jenkins web pages, which you'll need to disregard.

These errors will most likely be on the paths, which use forward slashes (/) instead of Windows backslashes (\) for the JDK and Perforce directories. Yes, you need to use forward slashes instead of the Windows backslashes for some of these paths.

Most likely, after these steps, you'll use the JDK and SCM setups in a Maven build project.

In your Maven build project, make sure that:

     1) Your root POM for windows uses backward slashes
          a) for example: \some-directory\maven-directory\pom.xml
     2) Your archive directory also uses backward slashes
          a) for example: **\some-directory\assembly\target\archive-file*.zip

Again, you'll encounter Jenkins error messages when setting up these two items in source code management. You can disregard those too.

Ultimately, the best way to make sure things are working is to actually run a build, look at the log, and disregard the misplaced warnings/error messages.

Saturday, January 11, 2014

Installing Apache on Ubuntu: apt-get versus doing a classic configure through make install process

We've been creating new virtual servers at work using Ubuntu snapshots on Openstack.

Most of these new instances are, unfortunately, bare bones. This is due to Openstack being new to most of us; as our config engineers start to create more robust template snapshots, hopefully the instances won't be as bare-bones going forward.

I was told by my manager during the initial setup that I shouldn't use apt-get for the installation of Jenkins. I was curious about that but didn't inquire at the time.

A day ago I did try using apt-get after my initial build of Apache 2.2 ran into issues when I enabled all modules.

At the initial install using apt-get I noted that the default install directory was under /etc/apache2.

Another unconventional difference was that the port the Apache HTTP server listens on lives in the ports.conf file under that root directory. I tried to override this by updating httpd.conf, but that file was not read when I restarted.

Finally, I realized I had to use the a2enmod command to enable the rewrite module and then use the service command to restart Apache.

Ultimately, after all these quirky changes for Apache on Ubuntu, I tried to rebuild using the tar distribution I built earlier.

This time I specified all the modules I wanted when I ran configure on the command line [in addition to those supplied by default], ran make, and did a make install. I was then able to start Apache and configure everything in httpd.conf under the conf directory.

Sometimes going back to the classic installation method is the best way to go.

Wednesday, January 08, 2014

Eclipse > Customizing the project explorer view. Not showing external libraries in your root directory

I do almost all of my automation development in Eclipse these days.

I usually import most projects through the Eclipse maven plugin, and then I can check out/in files through the Perforce plugin.

Before I build in Eclipse I usually run mvn clean install -Dmaven.test.skip=true, after I've gotten the latest changes from the depot.

The most recent time I ran mvn eclipse:eclipse -DdownloadSources=true, it reset my Eclipse project explorer view, and one project I used started to show all the imported JAR files in the root directory, making it a bit cumbersome to use.

A colleague helped me customize my view using the downward-caret symbol to the right of the project explorer view.

Here are some screenshots:

After selecting Customize View, scroll down to Libraries from External and select the associated checkbox, then save.

This should resolve all the JARs showing in your root directory.