Teamspeak 3 on CentOS 6

May 17, 2012

Create a user account for the Teamspeak service to run under

# useradd teamspeak
# passwd teamspeak

To use a MySQL database, you need to install an additional compatibility library that is not available from the default repositories. Download MySQL-shared-compat-6.0.11-0.rhel5.x86_64.rpm (this is the 64-bit version; if you are on a 32-bit system, you’ll need to find the i386 package somewhere) and install it:

# yum localinstall MySQL-shared-compat-6.0.11-0.rhel5.x86_64.rpm

If you are going to use a MySQL database, and assuming you already have a functional MySQL installation, create a database and user for Teamspeak:

# mysql
mysql> create database ts3db;
mysql> grant all on ts3db.* to 'ts3user'@'localhost' identified by 'ts3password';
mysql> flush privileges;

Create an init script for Teamspeak at /etc/init.d/teamspeak

#!/bin/bash
# /etc/init.d/teamspeak
# version 0.3.6 2011-10-17 (YYYY-MM-DD)

### BEGIN INIT INFO
# Provides:   teamspeak
# Required-Start: $local_fs $remote_fs
# Required-Stop:  $local_fs $remote_fs
# Should-Start:   $network
# Should-Stop:    $network
# Default-Start:  2 3 4 5
# Default-Stop:   0 1 6
# Short-Description:    Teamspeak 3 Server
# chkconfig: 2345 94 05
# Description:    Starts the Teamspeak 3 server
### END INIT INFO

#Settings
SERVICENAME='Teamspeak 3'
SPATH='/home/teamspeak/teamspeak3-server_linux-amd64'
SERVICE='/home/teamspeak/teamspeak3-server_linux-amd64/ts3server_startscript.sh'
OPTIONS='inifile=ts3server.ini'
USERNAME='teamspeak'

ME=`whoami`
# Run the given command as the teamspeak user (directly if we already are that user)
as_user() {
  if [ "$ME" == "$USERNAME" ] ; then
    bash -c "$1"
  else
    su - "$USERNAME" -c "$1"
  fi
}

mc_start() {
    echo "Starting $SERVICENAME..."
    as_user "cd $SPATH && $SERVICE start ${OPTIONS}"
}

mc_stop() {
    echo "Stopping $SERVICENAME"
    as_user "$SERVICE stop"
}

#Start-Stop here
case "$1" in
  start)
    mc_start
    ;;
  stop)
    mc_stop
    ;;
  restart)
    mc_stop
    mc_start
    ;;
  *)
  echo "Usage: /etc/init.d/teamspeak {start|stop|restart}"
  exit 1
  ;;
esac

exit 0

Now log in as the teamspeak user, download the Teamspeak 3 Server 64-bit for Linux, and extract it in your home directory

$ tar -xf teamspeak3-server_linux-amd64-3.0.5.tar.gz
$ cd teamspeak3-server_linux-amd64

If you are using MySQL, create a file called ts3server.ini which contains:

machine_id=
default_voice_port=9987
voice_ip=0.0.0.0
licensepath=
filetransfer_port=30033
filetransfer_ip=0.0.0.0
query_port=10011
query_ip=0.0.0.0
query_ip_whitelist=query_ip_whitelist.txt
query_ip_blacklist=query_ip_blacklist.txt
dbplugin=ts3db_mysql
dbpluginparameter=ts3db_mysql.ini
dbsqlpath=sql/
dbsqlcreatepath=create_mysql/
dbconnections=10
logpath=logs
logquerycommands=0
dbclientkeepdays=30
logappend=0

If you are NOT using MySQL and are using SQLite instead, create a file called ts3server.ini which contains:

machine_id=
default_voice_port=9987
voice_ip=0.0.0.0
licensepath=
filetransfer_port=30033
filetransfer_ip=0.0.0.0
query_port=10011
query_ip=0.0.0.0
query_ip_whitelist=query_ip_whitelist.txt
query_ip_blacklist=query_ip_blacklist.txt
dbplugin=ts3db_sqlite3
dbpluginparameter=
dbsqlpath=sql/
dbsqlcreatepath=create_sqlite/
dbconnections=10
logpath=logs
logquerycommands=0
dbclientkeepdays=30
logappend=0

If you are using MySQL, create a file called ts3db_mysql.ini which contains:

[config]
host=localhost
port=3306
username=ts3user
password=ts3password
database=ts3db
socket=
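
Before starting the server, it’s worth a quick check that these credentials actually work (assuming the mysql client is installed):

$ mysql -u ts3user -pts3password ts3db -e 'select 1;'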

Start Teamspeak

$ ./ts3server_startscript.sh start inifile=ts3server.ini

You should get a message about the Server Query Admin account that was created - take note of the login name and password. Stop the server with

$ ./ts3server_startscript.sh stop

Check the logs in the logs directory for errors. If everything is OK, log back in as root, make the init script executable, then enable and start the service

# chmod +x /etc/init.d/teamspeak
# chkconfig --add teamspeak
# chkconfig teamspeak on
# service teamspeak start
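
To confirm the server came up under the init script, check for processes owned by the teamspeak user:

# ps -u teamspeak -f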


Piranha on CentOS 6

February 3, 2012

I needed to set up a load balanced and redundant solution for a squid proxy server. My primary goals were simplicity and redundancy. I started with a stand-alone squid proxy server (CentOS 6, squid, NTLM authentication). Alone, this works great for an AD environment; single sign-on authentication so no password prompt, integrates with AD groups for various access levels, auto-proxy config using DHCP and DNS, etc. (anyway, I should post this config later).

The first step was building an identical squid server (I just cloned the VM, changed the name and IP). I then set up a unison cron job to sync the squid configs, so any change on one would propagate to the other:

* * * * * /usr/bin/unison /etc/squid ssh://proxy2//etc/squid -batch >> /dev/null 2>&1
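
For this job to run unattended, root on proxy1 needs password-less SSH access to proxy2 - a quick sketch, assuming root-to-root sync is acceptable in your environment:

# ssh-keygen -t rsa
# ssh-copy-id root@proxy2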

Once I had two functional proxy servers, my first attempt at redundancy relied on most browsers’ ability to switch to a second proxy server if the first goes down. I used an auto-configuration script to push out these settings, and incorporated IP-based load balancing as well. In the proxy.pac:

function FindProxyForURL(url, host) {
 var proxy1="proxy1:8080";
 var proxy2="proxy2:8080";
 // Split the load on the last octet of the client's IP:
 // even addresses prefer proxy1, odd addresses prefer proxy2.
 var myip=myIpAddress();
 var ipbits=myip.split(".");
 var myseg=parseInt(ipbits[3], 10);
 if(myseg%2==0) {
  var proxone=proxy1;
  var proxtwo=proxy2;
 }
 else {
  var proxone=proxy2;
  var proxtwo=proxy1;
 }
 // The second PROXY entry is the fallback if the preferred one is down.
 return "PROXY "+proxone+"; PROXY "+proxtwo+";";
}
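
You can sanity-check the PAC logic from a shell before deploying it - a sketch, assuming the pactester utility from the pacparser project is installed. A client with an odd last octet should get proxy2 listed first:

$ pactester -p proxy.pac -u http://www.example.com/ -c 10.120.100.55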

This works, but the problem I discovered is that some browsers can take a long time to determine that a proxy is down - about 15 seconds. And some browsers check for every host. Most websites these days have links, images, and ads from several different hosts, so just loading a single home page can literally take minutes if one of the proxy servers is down. And if you close your browser and re-open it, it has to check all over again.

Then I thought I would try a real load-balancing solution. I found Piranha, which is Red Hat’s load-balancing solution with optional redundancy. It’s very similar to Microsoft’s NLB for those familiar with it, but my biggest complaint is that it is typically designed to run on servers separate from your proxy servers (or whatever IP service you are load balancing). That means adding two more servers to the mix. I wanted simple, and turning two servers into four isn’t my kind of simple. So, why not try to get Piranha running on the proxy servers themselves? Here’s my layout:

proxy1: 10.120.100.60
proxy2: 10.120.100.61
Squid port: 8080
Available IP address to be used as the virtual IP: 10.120.100.62

I installed Piranha on each server:

# yum install piranha

and set the Piranha password:

# piranha-passwd

I had to enable iptables; I had turned it off initially. I turned it on and wiped the config - of course, don’t wipe yours if you are using it for other stuff! (more on why we need iptables later):

# chkconfig iptables on
# service iptables start
# iptables -F
# iptables -t nat -F
# iptables -t mangle -F
# iptables -X
# service iptables save

Enable IP forwarding (not sure this is required for our all-in-one design, but it’s standard for Piranha configs anyway):

# sysctl -w net.ipv4.ip_forward=1

To make this persistent across reboots, set the corresponding line in /etc/sysctl.conf:
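
net.ipv4.ip_forward = 1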

I turned on the web-based Piranha config on the first server:

# service piranha-gui start

The Piranha GUI runs a web server on port 3636. Browse to it and log in. The username is “piranha” with the password that you set earlier:

[Screenshot: Piranha GUI login]

On the GLOBAL SETTINGS page, set the Primary server public IP to the first server’s IP address. Network type should be Direct Routing. Private IP should be blank.

[Screenshot: Piranha global settings]

On the REDUNDANCY page, plug in the IP address of your second server.

[Screenshot: Piranha redundancy]

Add a virtual server. Use an available, unused IP address on the same subnet as your Piranha servers.

[Screenshot: Piranha virtual servers / virtual server details]

Add two Real Servers, which in this case are the same as your Piranha servers. Use their IP addresses; you can leave the port blank, since that was defined on the virtual server.

[Screenshot: Piranha real servers / real server details]

In my case, using squid, the default Monitoring Scripts work, since an HTTP GET provides a response.
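
To see roughly what the monitor sees, you can issue the same kind of request by hand against each real server (assuming curl is installed):

# curl -i http://10.120.100.60:8080/
# curl -i http://10.120.100.61:8080/

As long as squid answers with an HTTP status line (even an error page counts), the check passes.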

Now that the GUI config is done, copy the config from server 1 to server 2:

# scp /etc/sysconfig/ha/lvs.cf proxy2:/etc/sysconfig/ha/lvs.cf

The config needs to be identical on both servers. For reference, here’s what mine looks like:

serial_no = 26
primary = 10.120.100.60
service = lvs
backup_active = 1
backup = 10.120.100.61
heartbeat = 1
heartbeat_port = 539
keepalive = 3
deadtime = 6
network = direct
debug_level = NONE
monitor_links = 1
syncdaemon = 0
virtual proxy {
     active = 1
     address = 10.120.100.62 eth0:1
     vip_nmask = 255.255.255.0
     port = 8080
     persistent = 60
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     use_regex = 0
     load_monitor = none
     scheduler = wlc
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server proxy1 {
         address = 10.120.100.60
         active = 1
         weight = 1
     }
     server proxy2 {
         address = 10.120.100.61
         active = 1
         weight = 1
     }
}

Now you’re ready to start. Let’s start up the primary server first. On server 1:

# chkconfig pulse on
# service pulse start

Watch /var/log/messages for the following:

Feb  3 10:33:10 proxy1 pulse[2850]: STARTING PULSE AS MASTER
Feb  3 10:33:13 proxy1 pulse[2850]: partner dead: activating lvs
Feb  3 10:33:13 proxy1 lvs[2853]: starting virtual service Proxy active: 8080
Feb  3 10:33:13 proxy1 lvs[2853]: create_monitor for Proxy/proxy1 running as pid 2861
Feb  3 10:33:13 proxy1 lvs[2853]: create_monitor for Proxy/proxy2 running as pid 2862
Feb  3 10:33:13 proxy1 nanny[2862]: starting LVS client monitor for 10.120.100.62:8080 -> 10.120.100.61:8080
Feb  3 10:33:13 proxy1 nanny[2861]: starting LVS client monitor for 10.120.100.62:8080 -> 10.120.100.60:8080
Feb  3 10:33:14 proxy1 nanny[2861]: [ active ] making 10.120.100.60:8080 available
Feb  3 10:33:14 proxy1 nanny[2862]: [ active ] making 10.120.100.61:8080 available
Feb  3 10:33:14 proxy1 ntpd[1528]: Listening on interface #7 eth0:1, 10.120.100.62#123 Enabled
Feb  3 10:33:18 proxy1 pulse[2855]: gratuitous lvs arps finished

ifconfig should show the virtual IP address bound to eth0:1

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:C3:50:80
          inet addr:10.120.100.62  Bcast:10.120.100.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
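
You can also inspect the LVS table directly with ipvsadm (pulled in as a dependency of piranha); it should list the virtual service and both real servers:

# ipvsadm -L -n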

Start pulse on the backup server:

# chkconfig pulse on
# service pulse start

All you should see in /var/log/messages is:

Feb  3 10:35:45 proxy2 pulse[2820]: STARTING PULSE AS BACKUP

If you tried to use your new virtual IP address at this point, it won’t work. With direct routing, packets addressed to the virtual IP arrive at the real servers unchanged, but nothing there is listening on that address, so we need some iptables work to hand those packets to the local squid. On each server:

# iptables -t nat -A PREROUTING -p tcp -d 10.120.100.62 --dport 8080 -j REDIRECT
# service iptables save
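
To confirm the rule took, list the NAT table - you should see the REDIRECT entry on both servers:

# iptables -t nat -L PREROUTING -n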

That’s it. Now test everything: make sure your clients still work while you stop/start pulse, stop/start squid, reboot, etc. Worst case, clients may be unable to use the service for a few seconds. If you ever make changes, be sure to copy the lvs.cf to the other server and then reload pulse. Unfortunately, the GUI does not provide a method to copy the configs and reload the services.
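
A small sketch of pushing out a config change from the primary (assuming a pulse restart is acceptable; swap in a reload if your init script supports it):

# scp /etc/sysconfig/ha/lvs.cf proxy2:/etc/sysconfig/ha/lvs.cf
# service pulse restart
# ssh proxy2 service pulse restart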

Notes:
I was using VMware 4.1 as the host servers. Originally, I could not get the Redundancy heartbeat monitor to work - the servers could not see each other, so they both became active, assigning themselves the same IP address. It turns out I had to do two things to the VM guests:

  1. Use the E1000 Network card instead of the VMXNET.
  2. Add “pci=nomsi” to the kernel line in /boot/grub/grub.conf (maybe - not sure if this did anything)

You may notice that even after you stop pulse on both servers, your clients can still connect. Why? iptables is still redirecting traffic for that virtual IP address, even though it’s no longer bound to an adapter. And as long as your clients and/or switch still have the MAC address cached, traffic will still be sent to the last known port with that IP.
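
If you need connections to actually stop (for maintenance, say), remove the REDIRECT rule - the rule specification must match exactly what you added earlier:

# iptables -t nat -D PREROUTING -p tcp -d 10.120.100.62 --dport 8080 -j REDIRECT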


Homemade E-Mail Server Using CentOS + Postfix + Courier + More

February 23, 2011

A while back I built an e-mail server for a company. Using CentOS, Postfix, Courier, and MySQL, it ended up being very functional, supporting SMTP, POP3, IMAP, SSL, webmail, and more. Outlook is the primary desktop client used by the company, iPhones and Androids are also used, and I set up Roundcube for webmail access. The majority of the configuration was done using a guide by Michael Bowe, found here, with a few tweaks as needed. One item missing from his guide was a good tool to manage mailboxes, so I created a web-based tool in PHP. In case his website ever goes down, I’m attaching his original guide along with my web management tool.
PHP Web Management Tool
Original Guide by Michael Bowe
