Saturday, March 21, 2015

DNS Caching and Forwarding with Unbound

This howto shows the steps needed to configure Unbound for DNS caching and forwarding for clients on the 192.168.1.0/24 network. It assumes the server’s IP address is 192.168.1.22 and that it is running RHEL/CentOS 7.

Installation

[root@rhce-server ~]# yum install unbound

Configure Systemd

[root@rhce-server ~]# systemctl enable unbound
ln -s '/usr/lib/systemd/system/unbound.service' '/etc/systemd/system/multi-user.target.wants/unbound.service'
[root@rhce-server ~]# ^enable^start
systemctl start unbound

Configure the Firewall

[root@rhce-server ~]# firewall-cmd --add-service=dns
success
[root@rhce-server ~]# firewall-cmd --add-service=dns --permanent
success
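To double-check, you can list the services currently allowed by the firewall; dns should appear in the output.
[root@rhce-server ~]# firewall-cmd --list-services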

Configure Unbound

Unbound’s configuration is stored in /etc/unbound/unbound.conf.
By default Unbound only listens on the loopback interface. Specify the interface you would like it to listen on.
interface: 192.168.1.22
Allow queries from 192.168.1.0/24.
access-control: 192.168.1.0/24 allow
Disable DNSSEC validation for all domains.
domain-insecure: "."
Forward uncached requests to OpenDNS. Using "." as the zone name forwards queries for all domains.
forward-zone:
    name: "."
    forward-addr: 208.67.222.222
    forward-addr: 208.67.220.220
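Putting it all together, the relevant portions of /etc/unbound/unbound.conf look like this. Note that interface, access-control, and domain-insecure belong inside the server: clause, while forward-zone is its own top-level clause.
server:
    interface: 192.168.1.22
    access-control: 192.168.1.0/24 allow
    domain-insecure: "."

forward-zone:
    name: "."
    forward-addr: 208.67.222.222
    forward-addr: 208.67.220.220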

Check Your Configuration

[root@rhce-server ~]# unbound-checkconf 
unbound-checkconf: no errors in /etc/unbound/unbound.conf

Restart the Unbound Service

[root@rhce-server ~]# systemctl restart unbound

Verify it is Working

Test from a different system on the network.
mooose:~ jglemza$ dig fark.com A @192.168.1.22

; <<>> DiG 9.8.3-P1 <<>> fark.com A @192.168.1.22
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60299
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;fark.com.          IN  A

;; ANSWER SECTION:
fark.com.       43200   IN  A   64.191.171.200

;; Query time: 234 msec
;; SERVER: 192.168.1.22#53(192.168.1.22)
;; WHEN: Sat Mar 21 13:16:54 2015
;; MSG SIZE  rcvd: 42
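Running the same query a second time is a quick way to see the cache at work; the answer should come back almost immediately and with a TTL lower than the 43200 seconds above.
mooose:~ jglemza$ dig fark.com A @192.168.1.22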
Verify the record is now in unbound’s cache.
[root@rhce-server ~]# unbound-control dump_cache|grep fark.com
ns2.fark.com.   43197   IN  A   23.253.56.58
fark.com.   43197   IN  A   64.191.171.200
ns1.fark.com.   43197   IN  A   64.191.171.194
fark.com.   43197   IN  NS  ns1.fark.com.
fark.com.   43197   IN  NS  ns2.fark.com.
...

RHEL / CentOS 7 Network Teaming

Below is an example of how to configure network teaming on RHEL/CentOS 7. It is assumed that you have at least two network interface cards.

Show Current Network Interfaces

[root@rhce-server ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:0c:29:69:bf:87 brd ff:ff:ff:ff:ff:ff
3: eno33554984: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 00:0c:29:69:bf:91 brd ff:ff:ff:ff:ff:ff
The two devices I will be teaming are eno33554984 and eno16777736.

Create the Team Interface

[root@rhce-server ~]# nmcli connection add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'
This configures the team for the activebackup runner. Other runners include broadcast, roundrobin, loadbalance, and lacp.
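For example, the same command with the loadbalance runner would be:
[root@rhce-server ~]# nmcli connection add type team con-name team0 ifname team0 config '{"runner": {"name": "loadbalance"}}'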

Configure team0’s IP Address

[root@rhce-server ~]# nmcli connection modify team0 ipv4.addresses 192.168.1.22/24
[root@rhce-server ~]# nmcli connection modify team0 ipv4.method manual
You can also configure an IPv6 address by setting the ipv6.addresses property, as shown below.
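If the team interface also needs a default gateway, DNS server, or IPv6 address, those are set the same way. The values here are placeholders for this example network.
[root@rhce-server ~]# nmcli connection modify team0 ipv4.gateway 192.168.1.1
[root@rhce-server ~]# nmcli connection modify team0 ipv4.dns 192.168.1.1
[root@rhce-server ~]# nmcli connection modify team0 ipv6.addresses 2001:db8::22/64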

Configure the Team Slaves

[root@rhce-server ~]# nmcli connection add type team-slave con-name team0-slave1 ifname eno33554984 master team0 
Connection 'team0-slave1' (4167ea50-7d3a-4024-98e1-3058a4dcf0fa) successfully added.
[root@rhce-server ~]# nmcli connection add type team-slave con-name team0-slave2 ifname eno16777736 master team0 
Connection 'team0-slave2' (d5ed65d1-16a7-4bc7-8c4d-78e17a1ed8b3) successfully added.
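Depending on your NetworkManager settings the new connections may not activate on their own. If they do not, bring the slaves and the team up manually.
[root@rhce-server ~]# nmcli connection up team0-slave1
[root@rhce-server ~]# nmcli connection up team0-slave2
[root@rhce-server ~]# nmcli connection up team0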

Check the Connection

[root@rhce-server ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  eno16777736
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
  eno33554984
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
runner:
  active port: eno16777736

[root@rhce-server ~]# ping -I team0 192.168.1.1
PING 192.168.1.1 (192.168.1.1) from 192.168.1.22 team0: 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.38 ms
...

Test Failover

[root@rhce-server ~]# nmcli device disconnect eno16777736
[root@rhce-server ~]# teamdctl team0 state
setup:
  runner: activebackup
ports:
  eno33554984
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
runner:
  active port: eno33554984
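Once you are satisfied that failover works, reconnect the first interface and the team will pick it back up as a port.
[root@rhce-server ~]# nmcli device connect eno16777736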

Configuring Postfix as a Local Network Relay

This howto assumes that the relay server’s IP address is 192.168.1.22 and that it is running RHEL/CentOS 7. Only mail from the 192.168.1.0/24 network should be accepted and relayed.

Install Postfix

[root@rhce-server ~]# yum install postfix

Configure Systemd

[root@rhce-server ~]# systemctl enable postfix
[root@rhce-server ~]# ^enable^start
systemctl start postfix

Configure the Firewall

[root@rhce-server ~]# firewall-cmd --add-service=smtp
success
[root@rhce-server ~]# firewall-cmd --add-service=smtp --permanent 
success

Configure Postfix

Postfix’s main configuration file is located at /etc/postfix/main.cf.
Configure Postfix to listen on all interfaces instead of only the loopback (the default).
inet_interfaces = all
Configure the trusted network.
mynetworks = 192.168.1.0/24
Configure the list of domains that this Postfix service should consider itself the final destination for. In my case the server is named rhce-server.
mydestination = rhce-server, localhost.localdomain, localhost
Configure all mail not destined for this server to be relayed to another SMTP server. I am using Time Warner Cable’s SMTP server for Northeast Ohio. The brackets tell Postfix to turn off MX lookups.
relayhost = [smtp-server.neo.rr.com]
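For reference, the changed lines in /etc/postfix/main.cf from the steps above are:
inet_interfaces = all
mynetworks = 192.168.1.0/24
mydestination = rhce-server, localhost.localdomain, localhost
relayhost = [smtp-server.neo.rr.com]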

Restart Postfix

[root@rhce-server postfix]# systemctl restart postfix

Send a Test Email

[root@rhce-server postfix]# mail -s "rhce-server test" josh@example.com
testing our null postfix configuration
.
EOT
With any luck we should be all set. You can verify the mail was successfully relayed in /var/log/maillog.
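A quick way to check is to look for the relay= field Postfix logs for each delivery attempt.
[root@rhce-server postfix]# grep relay= /var/log/maillog | tail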

Tuesday, February 10, 2015

Running pfSense in Proxmox/KVM with PCI Passthrough

Below is how I was able to get pfSense 2.2 running under Proxmox 3.3 with PCI passthrough for two Intel NICs. My first attempts used VirtIO and e1000 network devices, but the performance was abysmal. With PCI passthrough I was able to achieve native throughput in my environment.
I am assuming that you have Proxmox running and a pfSense virtual machine already created.

Configure the Proxmox Test Repository

The first thing we need to do is enable the Proxmox test repository so that we may install the 3.10 kernel.
echo 'deb http://download.proxmox.com/debian wheezy pvetest' >> /etc/apt/sources.list

Install the 3.10 Kernel

Refresh the package lists so the new repository is picked up, then install the kernel.
apt-get update
apt-get install pve-kernel-3.10.0-6

Edit Grub Configuration

We need to pass a kernel flag to enable IOMMU. In my case I am using an AMD processor and added amd_iommu=on to the following line in /etc/default/grub. If you are using an Intel processor you would add intel_iommu=on.
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"
Update the GRUB configuration.
update-grub
Reboot the server. By default the 3.10 kernel should be selected.
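After the reboot you can confirm that the kernel actually enabled IOMMU; on AMD look for AMD-Vi messages, on Intel look for DMAR/IOMMU messages.
dmesg | grep -e AMD-Vi -e DMAR -e IOMMU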

Identify Your NICs

Identify the PCI devices you want to passthrough to your virtual machine.
lspci
In my case I was looking for my Intel NICs.
lspci | grep Intel

03:00.0 Ethernet controller: Intel Corporation 82541PI Gigabit Ethernet Controller (rev 05)
04:00.0 Ethernet controller: Intel Corporation 82541PI Gigabit Ethernet Controller (rev 05)
You will need to note their addresses.

Edit the Virtual Machine Configuration

Below is an example of my working configuration. You can find these configurations in /etc/pve/qemu-server/. The file you are looking for will correspond with the virtual machine ID. In my case 100.conf.
boot: cdn
bootdisk: ide0
cores: 2
cpu: qemu32
hostpci0: 03:00.0,pcie=1,driver=vfio
hostpci1: 04:00.0,pcie=1,driver=vfio
ide0: local:100/vm-100-disk-1.qcow2,format=qcow2,size=16G
ide2: local:iso/pfSense-LiveCD-2.2-RELEASE-i386.iso,media=cdrom,size=206916K
machine: pc-q35-2.0
memory: 1024
name: pfSense
onboot: 1
ostype: other
smbios1: uuid=0f590e3e-88a0-4084-8a6f-f5a2380a01fa
sockets: 2
tablet: 0
Notice that I added the hostpci0, hostpci1, and machine options. The hostpciX options identify which PCI devices we want to pass through; as found above, my NICs are at 03:00.0 and 04:00.0. In my experience, machine must be set to pc-q35-2.0 for PCI passthrough to work with FreeBSD.

Conclusion

With those options set you should be able to boot your pfSense virtual machine and see your PCI devices natively.
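If you prefer the command line to the web interface, the virtual machine can be started with qm, where 100 is the VM ID used in this example.
qm start 100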

Wednesday, January 7, 2015

Using Apache as a Reverse Proxy with SiteMinder for Authentication

When using Apache as a reverse proxy to pass SiteMinder authentication through to another application, you may need to send the header variable on to that application. By default Apache does not pass these variables along, so you need to set them up for your proxy.

RequestHeader set REMOTE_USER %{HTTP:UID}s

Above is an example of the UID header returned by SiteMinder being assigned to the REMOTE_USER header that is passed on to the end application.
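For context, here is a minimal sketch of how that directive might sit inside a reverse proxy virtual host. The hostnames and backend URL are placeholders, and mod_proxy, mod_proxy_http, and mod_headers must be loaded.
<VirtualHost *:80>
    ServerName app.example.com

    # Send everything to the backend application
    ProxyPass        / http://backend.example.com:8080/
    ProxyPassReverse / http://backend.example.com:8080/

    # Pass the SiteMinder-provided UID header on as REMOTE_USER
    RequestHeader set REMOTE_USER %{HTTP:UID}s
</VirtualHost>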

Wednesday, February 19, 2014

Detect a New Disk and Grow a Filesystem in Linux without Rebooting

If you have ever wanted to add a new hard disk to a Linux VMware guest without rebooting, here is the solution. After running this command you should be able to use the new disk as you please. This is really useful for growing an LVM filesystem on the fly.
Run this command after adding the new disk to your VMware configuration. This will allow your Linux guest to detect the new disk and assign it as a device.
ls /sys/class/scsi_host/ | while read host ; do echo "- - -" > /sys/class/scsi_host/$host/scan ; done
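To confirm the guest picked up the new disk, list the block devices and check the kernel messages.
lsblk
dmesg | tail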
Once the new device is visible you can grow the LVM filesystem as usual (where sdX is the new device that was added).
# Create a new partition on the new disk. Assign it the type of Linux LVM, 8e.
fdisk /dev/sdX

# Create the new physical volume.
pvcreate /dev/sdX1

# Extend the volume group of your choice.
vgextend VolumeGroupName /dev/sdX1

# Confirm the new physical extents are available and take note of the number.
vgdisplay

# Extend a logical volume where 1234 is the number of PEs you would like to add.
lvextend -l +1234 /dev/VolumeGroupName/LogVolName

# Resize the EXT filesystem.
resize2fs /dev/VolumeGroupName/LogVolName

# Confirm the new space is available.
df -h
Once complete you should have extra space on your Linux guest.

Sunday, February 16, 2014

Recursive Directory/File Permissions

I constantly need to recursively set different permissions on files and directories to make suPHP happy. I’m posting this here for easy reference.
# Directories
find . -type d -exec chmod 755 {} +

# Files
find . -type f -exec chmod 644 {} +
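The same thing scoped to a specific document root; the path here is just a placeholder.
# Directories and files under a suPHP web root
find /var/www/example.com -type d -exec chmod 755 {} +
find /var/www/example.com -type f -exec chmod 644 {} +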