Thursday, July 28, 2011
Optimize Solaris's TCP for internet services
Tuesday, July 26, 2011
High-Availability Storage With GlusterFS On Ubuntu
This tutorial shows how to set up high-availability storage with two storage servers (Ubuntu 10.04) that use GlusterFS. Each storage server will be a mirror of the other, and files will be replicated automatically across both storage servers. The client system (Ubuntu 10.04 as well) will be able to access the storage as if it were a local filesystem. GlusterFS is a clustered file-system capable of scaling to several petabytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware, such as x86_64 servers with SATA-II RAID and Infiniband HBA.
1. Preliminary Note
2. Setting Up The GlusterFS Servers
3. Setting Up The GlusterFS Client
4. Testing
Read more http://blogmee.info/index.php/high-availability-storage-with-glusterfs-on-ubuntu/
Monday, July 25, 2011
Optimized Disk I/O in Solaris Unix
- POSIX: Application calls a POSIX library interface. (These frequently map directly to system calls, except for the asynchronous interfaces, which work via pread and pwrite.)
- System Call
- VOP
- Filesystems
- Physical Disk I/O
Disk Utilization
Disk Saturation
Usage Pattern
Disk Errors
Filesystem Performance
Filesystem Caching
Inode Cache
Buffer Cache
Inodes
Physical I/O
Direct I/O
Solaris Kernel Tuning
sysdef -i reports on several system resource limits. Other parameters can be checked on a running system using adb -k:
adb -k /dev/ksyms /dev/mem
parameter-name/D
^D (to exit)
Tuesday, July 19, 2011
Zettabyte file system - ZFS Management
ZFS was first publicly released in the 6/2006 distribution of Solaris 10. Previous versions of Solaris 10 did not include ZFS. ZFS is flexible, scalable and reliable. It is a POSIX-compliant filesystem with several important features:
- integrated storage pool management
- data protection and consistency, including RAID
- integrated management for mounts and NFS sharing
- scrubbing and data integrity protection
- snapshots and clones
- advanced backup and restore features
- excellent scalability
- built-in compression
- maintenance and troubleshooting capabilities
- automatic sharing of disk space and I/O bandwidth across disk devices in a pool
- endian neutrality
No separate filesystem creation step is required. The mount of the filesystem is automatic and does not require vfstab maintenance. Mounts are controlled via the mountpoint attribute of each file system.
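In practice the whole workflow is only a few commands. A minimal sketch (the pool name and disk devices are placeholders):
# zpool create tank mirror c0t0d0 c0t1d0
# zfs create tank/home
# zfs set mountpoint=/export/home tank/home
The filesystem is created, mounted, and remembered across reboots with no vfstab entry.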
Pool Management
Filesystem Management
RAID Levels
Performance Monitoring
Snapshots and Clones
Zones
Data Protection
Hardware Maintenance
Troubleshooting ZFS
Scalability
ZFS Recommendations
Sun Cluster Integration
ZFS Internals
Read more http://blogmee.info/index.php/zettabyte-file-system-zfs-management/
Solaris Process Scheduling
ps -elcL
Kernel Threads Model
Time Slicing for TS and IA
Solaris Performance Monitoring by SAR Command (System Activity Report)
The sa1 program stores performance data in the /var/adm/sa directory. sa2 writes reports from this data, and sadc is a more general version of sa1. sa2-produced reports are not terribly useful in most cases. Depending on the issue being examined, it may be sufficient to run sa1 at intervals that can be set in the sys crontab. sar can be used on the command line to look at performance over different time slices or over a constricted period of time:
# sar -A -o outfile 5 2000
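To collect data automatically, a typical sys crontab entry looks like this (the 20-minute interval is just an example):
0,20,40 * * * * /usr/lib/sa/sa1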
System Activity Reporter
Sunday, July 17, 2011
Solaris monitoring - Command DTrace
DTrace consumers include the dtrace and lockstat commands, as well as programs calling libraries that access DTrace through the dtrace kernel driver.
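As a quick illustration, a classic DTrace one-liner that counts system calls per executable (run as root):
# dtrace -n 'syscall:::entry { @num[execname] = count(); }'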
Read more http://blogmee.info/index.php/solaris-monitoring-command-dtrace/
Solaris Performance Monitoring and Tuning
iostat, vmstat and netstat are the three most commonly used tools for performance monitoring. They come built in with the operating system and are easy to use. iostat stands for input/output statistics and reports statistics for I/O devices such as disk drives. vmstat gives the statistics for virtual memory, and netstat gives the network statistics.
The following paragraphs describe these tools and their usage for performance monitoring.
Table of contents:
1. iostat
* Syntax
* Example
* Results and Solutions
2. vmstat
* Syntax
* Example
* Results and Solutions
3. netstat
* Syntax
* Example
* Results and Solutions
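Before diving into each tool, typical invocations look like this (sampling every 5 seconds; the counts and Solaris-style flags are illustrative):
# iostat -xn 5 10
# vmstat 5 10
# netstat -i 5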
Read more http://blogmee.info/index.php/solaris-performance-monitoring-and-tuning
Friday, July 15, 2011
Monitor connections in Server by Netstat command
Whenever a client connects to a server over the network, a connection is established and opened on the system. On a busy, high-load server, the number of connections can run into the hundreds, if not thousands. Getting a list of connections on the server by node, client or IP address is useful for system scaling planning, and in many cases for detecting whether a web server is under a DoS or DDoS (Distributed Denial of Service) attack, where one IP sends a large number of connections to the server. To check connection numbers on the server, administrators and webmasters can make use of the netstat command.
Below are some examples of typical 'netstat' command syntax to check and show the number of connections a server has. Users can also run 'man netstat' for the detailed netstat help and manual, which covers lots of configurable options and flags for getting meaningful lists and results.
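For example, a quick sketch (assuming a web server listening on port 80) that counts established connections:
netstat -an | grep :80 | grep ESTABLISHED | wc -l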
Read more http://blogmee.info/index.php/monitor-connections-in-server-by-netstat-command/
Create a Linux software RAID array
First thing, the mdadm utility is needed.
# apt-get install mdadm
will grab this for you.
Next, we'll need some disk partitions. These can be on the same physical disk (mdadm may gripe about this), which is fine for testing, but for "real" data, use partitions on separate physical disks.
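As a sketch of the creation step (assuming /dev/sdb1 and /dev/sdc1 are your two partitions), a two-disk RAID-1 mirror looks like this:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1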
Read more http://blogmee.info/index.php/create-a-linux-software-raid-array/
Using Command Vi
If you use Unix or Linux systems, you'll probably have to learn how to use vi eventually. When you first start vi, you are in command mode. If you are ever unsure which mode you're in, hit <esc> and you will be in command mode for sure.
Search Functions
Move and Insert Text
Save Files and Exit
Control Edit Session
Screen/Line Movement
Word Movement
Delete Text
Cancel Edit Function
Copy and Insert Text
Add/Append Text
Add New Lines
Change Text
Read more http://blogmee.info/index.php/used-command-vi/
Purge (Rotate) Apache logs using Awstats
This post will show how you can rotate the Apache logs using awstats right after it has processed them. This can be beneficial in situations where you have quite big logs, since this method will keep them small all the time, and also where restarting Apache just for log rotation is not such a good idea. Obviously, for this to make sense, you need to already be using awstats for your log processing.
The awstats config option PurgeLogFile will purge the log file after analyzing it. The default is 0, which means no purge is done; awstats will assume some external tool is used for log rotation (like logrotate, or even Apache-internal mechanisms like rotatelogs). If this is set to 1, awstats will purge the log file every time it is run:
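In the awstats site configuration file (awstats.www.example.com.conf is a placeholder name), that is a single line:
PurgeLogFile=1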
Read more http://blogmee.info/index.php/purge-rotate-apache-logs-using-awstats/
How to ignore some files or folders from awstats reports
NotPageList="css js class gif jpg jpeg png bmp ico"
(this is the default). All other file types will be counted as pages. Now, if we want to completely ignore some files, or even all the content of one folder, from the awstats processing, we can use the SkipFiles parameter. We might want to do this to ignore some frames, hidden pages, ajax calls, etc.
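For illustration, a SkipFiles line that ignores one script and one folder might look like this (the names are hypothetical; the REGEX[] form is awstats' own syntax):
SkipFiles="/ajax/poll.php REGEX[^\/hidden\/]"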
Read more http://blogmee.info/index.php/how-to-ignore-some-files-or-folders-from-awstats-reports/
Monitoring Real-Time Apache Traffic in FreeBSD
An in-depth analysis of the log files is great, but sometimes you just want to see what is happening on your web sites at the moment. In this tutorial we will go over some of the ways to do this and show you how to set them up.
Using apachetop
Apachetop is a very useful and small program that displays the stats for Apache as they happen. It can tell you how many requests per second are coming in, what files have been accessed in the last set amount of time, and how many times. It can also show you who is hitting the sites and where they are coming from. It can be downloaded from its homepage or installed from the ports, as sketched below.
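A sketch of the ports install and a first run, assuming the port lives at sysutils/apachetop and your access log is at the path shown:
# cd /usr/ports/sysutils/apachetop
# make install clean
# apachetop -f /var/log/httpd-access.log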
Using multitail to watch the logs
Multitail is a program which shows you the tail of several files on the screen at once and automatically scrolls them as they are updated. Just watching the log files continuously rolling by can be confusing at first, but once you get used to it, it's easy to pick out important information so you can figure out what is happening on the web server.
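For example, to watch an access log and an error log in one terminal (the log paths are examples):
# multitail /var/log/httpd-access.log /var/log/httpd-error.log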
Read more http://blogmee.info/index.php/monitoring-real-time-traffic-apache-in-freebsd/
Protecting directories with htaccess in Apache
Apache allows access to directories to be restricted unless overridden by a valid user name and password. Here you will see how to set it up in your config file, how to create the .htaccess file, and how to generate the password file for it.
Denying access in httpd.conf
Creating an .htaccess file
Generating the password file
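A condensed sketch of all three steps (paths, realm and user names are placeholders):
In httpd.conf, allow authentication overrides for the directory:
<Directory "/usr/local/www/apache22/data/private">
AllowOverride AuthConfig
</Directory>
In the .htaccess file inside that directory:
AuthType Basic
AuthName "Restricted Area"
AuthUserFile /usr/local/etc/apache22/.htpasswd
Require valid-user
And generate the password file:
# htpasswd -c /usr/local/etc/apache22/.htpasswd someuser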
Read more http://blogmee.info/index.php/analyzing-web-traffic-with-awstats-in-freebsd/
Analyzing web traffic with awstats in Freebsd
Once your web server is up and running it is important to analyze your logs to see what searches are bringing users to the sites, how long they are staying, and what pages they are coming in and going out on. One of the most popular open source tools for this task is awstats.
Installing awstats from the ports
Configuring awstats
Updating the stats
Updating multiple sites
Viewing the stats
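For reference, the update step boils down to one command (the install path and config name here are assumptions based on the port's defaults):
# /usr/local/www/awstats/cgi-bin/awstats.pl -config=www.example.com -update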
Read more http://blogmee.info/index.php/analyzing-web-traffic-with-awstats-in-freebsd/
Configure mod deflate for Apache 2.x in Freebsd
When you get a good number of visitors on your website, you can end up with a rather large Internet bandwidth bill from your web hosting company. This is usually a good problem to have as it means your website is generating traffic. However, there are some steps you can take to try to optimize your website so that it consumes less bandwidth per user. There are a number of ways to do this. Let's look at one such solution for Apache. We'll learn how to setup and use the Apache module mod_deflate in FreeBSD
This module adds an output filter that allows output from your server to be compressed. A great side effect of the implementation of this module is that it also speeds up your website.
mod_deflate comes built into Apache, but is not enabled by default. This tutorial will explain the simplest way of enabling it and setting which MIME types to compress.
Note: mod_deflate will increase your server load, but it decreases the amount of time that clients are connected and can usually reduce page size by 60 to 80 percent. A sample configuration follows.
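A minimal configuration sketch (the MIME type list is illustrative, not exhaustive):
<IfModule mod_deflate.c>
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript
</IfModule>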
Read more http://blogmee.info/index.php/configure-mod-deflate-for-apache-2-x-in-freebsd/
Thursday, July 14, 2011
Method to reduce web server load by caching content: using Lighttpd
If you have ever developed a website that has tasted a bit of success, you may have run into serious problems such as server overload. Here we try a method to resolve the problem by caching content in Lighttpd with mod_cache and mod_proxy:
Mod_cache provides a shared web cache for mod_proxy and uses a configuration similar to Squid Cache. mod_cache has several key benefits:
- Simple: mod_cache sets lighttpd flags between request handling stages. The request is then handled by mod_staticfile, mod_proxy, or other modules.
- Robust: mod_cache stores caches on the filesystem rather than in memory in order to avoid memory leaks/exhaustion.
- Fast: Lighttpd uses the sendfile system call, which writes the file to the network interface directly.
- Powerful: mod_cache can be used in conjunction with other lighttpd plugins (except mod_deflate and mod_secdownload). For example, using mod_compress to compress cached files, using mod_access/mod_trigger_b4_dl to implement anti-hot-link restrictions, or using mod_flv_streaming to do native flv file streaming.
With the shared web cache, mod_proxy is able to deliver content from the local cache without having to re-download bulky files, simultaneously increasing speed from the user's perspective while reducing bandwidth upstream of the server.
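For reference, the mod_proxy half of such a setup looks like this in lighttpd.conf (the backend address is a placeholder; mod_cache's own directives vary by build and are omitted here):
server.modules += ( "mod_proxy" )
proxy.server = ( "" => ( ( "host" => "10.0.0.10", "port" => 8080 ) ) )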
Read more http://blogmee.info/index.php/methos-to-reduce-the-load-webserver-by-caching-content-using-lighttpd/
Reduce server load by MySQL caching and optimization
As web traffic starts growing, CMS-based websites take more time to load, and therefore the MySQL server needs to be optimized, or at least it should utilize available server resources judiciously in order to meet future traffic demands. Database caching can significantly improve your CMS performance during peak hours. Although the main factor that affects database performance is how the queries have been written, a significant performance boost can still be achieved by tweaking MySQL settings. Let's learn how to tweak MySQL settings to optimize and improve its caching in order to get optimum performance and reduce server load.
The MySQL database server has a configuration file that allows us to change some parameters and configuration settings. The default settings may not solve your purpose, so you need to edit them in order to gain the maximum benefit. The file is called my.ini or my.cnf and is usually found in /etc/ or /etc/mysql/. You can use the vi editor to open and edit it. The parameters most worth tuning are listed below; a sample fragment follows the list.
- query_cache_size
- key_buffer
- table_cache
- sort_buffer
- read_rnd_buffer_size
- thread_cache
- tmp_table_size
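For illustration only, a my.cnf fragment touching these parameters might look like this (the values are arbitrary starting points, not recommendations; some names shown are the canonical forms of the aliases listed above):
[mysqld]
query_cache_size = 64M
key_buffer = 128M
table_cache = 512
sort_buffer_size = 2M
read_rnd_buffer_size = 1M
thread_cache_size = 8
tmp_table_size = 64M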
Read more http://blogmee.info/index.php/reduce-server-load-by-mysql-caching-and-optimization/
Install PHP5 with apache in Freebsd
Choosing which port to use
In the past there were several ports for PHP, such as /www/mod-php5, /lang/php5-cli, and /lang/php5. Since the release of PHP 5.1.14 there is now only /lang/php5. This port allows you to choose whether you want to install the CLI, CGI, and Apache module.
CLI stands for command line interpreter. It is used for running PHP scripts from the command line and makes creating shell scripts very simple if you already know PHP. The Apache PHP module is disabled by default, so make SURE that if you plan to use this for web work you enable it.
Installing the port
Configuring PHP
Testing PHP
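A sketch of the install and a quick sanity test (the document root shown is the apache22 default; adjust to yours):
# cd /usr/ports/lang/php5
# make config
# make install clean
# echo '<?php phpinfo(); ?>' > /usr/local/www/apache22/data/info.php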
Read more http://blogmee.info/index.php/install-php5-with-apache-in-freebsd/
Setup Apache2.x in FreeBSD
Apache 2.2 can be installed from the ports with the following commands
# cd /usr/ports/www/apache22
# make install
You will need to add an enable line for Apache to your /etc/rc.conf file:
apache22_enable="YES"
Apache installs a startup script in /usr/local/etc/rc.d, but to stop and start the server the apachectl command is used, which we will look at later when it is time to start the server.
1. Configuring Apache's httpd.conf
2. Loading the accf_http module
3. Starting Apache
4. Adding Virtual Hosts
5. Accessing Virtual Hosting without the Hostname
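As a preview of step 2, the accf_http kernel module can be loaded immediately and made persistent across reboots using standard FreeBSD mechanics:
# kldload accf_http
# echo 'accf_http_load="YES"' >> /boot/loader.conf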
Read more http://blogmee.info/index.php/setup-apache2-x-in-freebsd/
Save Traffic and Bandwidth With Lighttpd's mod_compress
Output compression reduces the network load and can improve the overall throughput of the webserver. All major HTTP clients support compression by announcing it in the Accept-Encoding header. This is used to negotiate the most suitable compression method. Lighttpd supports deflate, gzip and bzip2.
Deflate (RFC1950, RFC1951) and gzip (RFC1952) depend on zlib while bzip2 depends on libbzip2. bzip2 is only supported by lynx and some other console text-browsers.
Lighttpd limits compression support to static files. mod_compress can store compressed files on disk to optimize compression for repeated requests. As soon as compress.cache-dir is set, the files are compressed.
(You will need to create the cache directory if it doesn't already exist. The web server will not do this for you. The directory will also need the proper ownership. For Debian/Ubuntu the user and group ids should both be www-data.)
The module limits compression to files smaller than 128 MB and larger than 128 bytes. The lower limit is set because small files tend to become larger when compressed, due to the compression headers; the upper limit is set to work sensibly with memory and CPU time. In fact, if you are on a low-end server, you may get better performance if you disable mod_compress.
Configure mod_compress
Compressing Dynamic Content, e.g. PHP
To test
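A minimal configuration sketch (the cache path and MIME list are examples), followed by a test request that negotiates gzip:
server.modules += ( "mod_compress" )
compress.cache-dir = "/var/cache/lighttpd/compress/"
compress.filetype = ( "text/html", "text/plain", "text/css" )
# curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://localhost/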
Read more http://blogmee.info/index.php/save-traffic-and-banwidth-with-lighttpds-mod_compress/
Linux Display System Statistics Gathered From /proc
You can write a shell or Perl script to grab all the info from the /proc file system, but Linux comes with the procinfo command to gather some system data from the /proc directory and print it nicely formatted on the screen.
Install procinfo Command
How Do I Use procinfo Command?
Task: Run procinfo Continuously Full-screen
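A sketch of both steps on a Debian-style system (-f runs procinfo full-screen, updating continuously):
# apt-get install procinfo
# procinfo -f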
Read more http://blogmee.info/index.php/linux-display-system-statistics-gathered-from-proc/
What does serial / refresh / retry / expire / minimum / and TTL mean?
Because of the huge volume of requests generated by a system like the DNS, the designers wished to provide a mechanism to reduce the load on individual DNS servers. The mechanism devised provided that when a DNS resolver (i.e. client) received a DNS response, it would cache that response for a given period of time. A value (set by the administrator of the DNS server handing out the response) called the time to live, or TTL defines that period of time. Once a response goes into cache, the resolver will consult its cached (stored) answer; only when the TTL expires (or when an administrator manually flushes the response from the resolver's memory) will the resolver contact the DNS server for the same information.
Generally, the time to live is specified in the Start of Authority (SOA) record. SOA parameters are:
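For illustration, here is a typical zone-file SOA with the fields labeled (all values are examples):
example.com. IN SOA ns1.example.com. hostmaster.example.com. (
2011071101 ; serial
86400 ; refresh
7200 ; retry
3600000 ; expire
86400 ) ; minimum / negative-caching TTL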
Read more http://blogmee.info/index.php/what-does-serial-refresh-retry-expire-minimum-and-ttl-mean/
Wednesday, July 13, 2011
What is Virtuozzo?
Virtuozzo is a container-based virtualization solution that allows the sharing of hardware via an abstraction layer. Virtuozzo creates Containers, also known as VEs or VPSs, that simulate a server. The container acts and responds mostly as if it were a stand-alone server. The container is completely separate from other containers that are located on the same physical server, in that it cannot access other containers' files, IPC resources, or memory. The network can be configured to be shared between multiple containers or completely isolated.
What is Citrix XenSource ?
XenServer is a server virtualization platform that offers near bare-metal virtualization performance for virtualized server and client operating systems.
XenServer uses the Xen hypervisor to virtualize each server on which it is installed, enabling each to host multiple Virtual Machines simultaneously with guaranteed performance. XenServer also allows you to combine multiple Xen-enabled servers into a powerful Resource Pool, using industry-standard shared storage architectures and leveraging resource clustering technology created by XenSource. In doing so, XenServer extends the basic single-server notion of virtualization to enable seamless virtualization of multiple servers as a Resource Pool, whose storage, memory, CPU and networking resources can be dynamically controlled to deliver optimal performance, increased resiliency and availability, and maximum utilization of data center resources.
XenServer allows IT managers to create multiple clusters of Resource Pools, and to manage them and their resources from a single point of control, reducing complexity and cost, and dramatically simplifying the adoption and utility of a virtualized data center environment. With XenServer, a rack of servers can become a highly available compute cluster that protects key application workloads, leverages industry standard storage architectures, and offers no-downtime maintenance by allowing Virtual Machines to be moved while they are running between machines in the cluster. XenServer extends the most powerful abstraction: virtualization across servers, storage and networking to enable users to realize the full potential of a dynamic, responsive, efficient data center environment for Windows and Linux workloads.
Adding a hard drive to Citrix Xen Server
Adding a new hard drive in XenServer is a bit different from the traditional Linux process. For Xen servers, you need to create a container called a 'storage repository' (SR) to define a particular storage target (such as a hard disk), in which Virtual Disk Images (VDIs) of VMs are stored. A VDI is nothing but an abstracted storage space which acts as the hard disk for VMs.
A Xen storage repository supports IDE, SATA, SCSI and SAS drives when locally connected, apart from iSCSI, NFS, SAS and Fibre Channel in the case of remote storage.
Steps to create an SR in XenServer:
1. SSH to the Xen server as root.
2. Find the disk ID of the new device using the following commands:
3. Find out the 'host-uuid' in the Xen server using the following command.
4. Create a Storage Repository (SR):
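A condensed sketch of steps 2-4 using the xe CLI (the UUID and disk ID are placeholders you must substitute):
# ls -l /dev/disk/by-id
# xe host-list
# xe sr-create host-uuid=<host-uuid> type=lvm content-type=user name-label="Local storage 2" device-config:device=/dev/disk/by-id/<disk-id>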
Read more http://blogmee.info/index.php/adding-a-hard-drive-to-citrix-xen-server/
Intro to SSL
What is SSL?
Secure Sockets Layer (SSL) is a technology which encrypts traffic between the client application and the server application involved in the conversation. This encryption is accomplished by making use of a public key/private key system using an SSL certificate.
The SSL certificate contains the server’s public key, dates for which the certificate is valid, a hostname for which the certificate is valid and a signature from the Certification Authority which issued it. With this information and some protocol information exchanged during the beginning of a session the client can be reasonably certain that the server is the one to which it is intending to talk.
He said what?
As with everything else in Information Technology SSL certificates have their own terminology. Here is a small glossary for some of the terms you will encounter while dealing with SSL certificates.
Bit size: Encryption keys are measured by their size in bits. For example 512 bit, 1024 bit, 2048 bit. Generally a longer key is going to be safer but probably slower to use. At this time the minimum size for the keys used in SSL certificates is 1024 bit, though the Extended Validation certificates require 2048 bit.
Certificate Chain: SSL certificates are not generally used alone. In most implementations you will actually be dealing with a certificate chain. For example:
Root > Intermediate1 > Server cert
Root > Intermediate2 > Server2 cert
In this example your server certificate is signed by the intermediate certificate which is in turn signed by the root certificate. Chaining in this fashion can make SSL more secure because it means that the root certificate is not used (and thus exposed to risk) so often. If intermediate1 was compromised then server cert could be in danger but server2 cert would be fine because they are part of different chains.
Certificate Signing Request: the CSR is a document you generate on the server which contains information that the Certification Authority uses to create your actual certificate.
Common Name: the Common Name (CN) is the hostname for which the certificate is valid (for example, www.domain.com). It should be noted that www.domain.com, smtp.domain.com and mail.domain.com are three completely different hostnames, and the same SSL certificate is not valid for all three of them (unless you are using a wildcard certificate, but at this time we do not offer those).
Private/Public Key: SSL makes use of a technique called public key cryptography. In this form of crypto you have two keys, the public and the private. The public key is distributed far and wide. No one sees your private key. People who wish to communicate securely with you encrypt their communication using YOUR public key. Public key cryptography is based upon the assertion that bits encrypted with a given public key can only be decrypted using the corresponding private key and vice versa.
Root certificate: The SSL root certificates are certificates which have signed themselves and which have been presented to the world by their respective Certification Authorities as the top of their chain. You will find root certificates for the major players already installed in the certificate store for your web browser. This allows your browser to trust those certificates and forms the beginnings of the chain of trust leading ultimately to the certificate you install on your server.
Signature: SSL certificates have a digital signature placed upon them by the Certification Authority. It is this signature which, when traced back to a trusted root certificate, confirms the authenticity of the certificate.
Why use SSL?
Planning Ahead for SSL
You’ve read the arguments for SSL and you’ve decided an SSL certificate is right for you. Now what? Well, the “now what” is the purpose of this article. Getting started with SSL requires a bit of planning before you make the first move.
1) Where will you get the certificate?
2) What kind of certificate will be used?
3) Key length and certificate duration
4) What Common Name will you be protecting with the certificate?
5) The socket rule
I’ll conclude with information on ordering SSL certificates here at SoftLayer.
Where to get the certificate
SSL certificates can be obtained internally in your organization or from a Certification Authority. The difference is one of audience. If your audience is a captive group under your control such as employees using an Intranet site you could do a Self-Signed certificate and have each employee install it in their browser. You could also setup a local Certification Authority of your own to generate certificates for use in your organization.
If your audience is a larger, more diverse group, you most likely are not going to be able to mandate that they install your home-rolled certificate. Without doing the installation, your visitors will get a warning saying that the locally created certificate is not valid, since their browser will not be able to validate the signature on it. This is where Certificate Authorities like Verisign, RapidSSL, Thawte and so forth come into play. Modern web browsers are configured out of the box to trust root certificates issued by the big players in SSL. This trust point gives the browser a way to validate the signature on certificates issued by those organizations.
The remainder of this document is going to assume you’re going with a certificate from one of the major Certificate Authorities. Further I will assume, since this is the SoftLayer KnowledgeLayer, that you will be acquiring this certificate through SoftLayer.
Kinds of certificates
The first thing to decide when preparing to order a certificate from SoftLayer is what level of SSL certificate do you need? The Domain Validated certificates are available quickly and with a minimum of hassle. The Organization and Extended Validation certificates require more time (2 to 3 days up to a week) while our vendor does their probing to verify that your organization exists and that the person making the request for the certificate is actually authorized to make such a request.
Key length / Certificate duration
Having decided between DV, OV and EV, your next decisions are the length of the keys for the certificate and the length of time the certificate will be valid. Generally your options for key size are 1024 bit and 2048 bit. For Extended Validation you have to use 2048 bit. Longer is considered safer but shorter is faster. If in doubt, I’d say 2048. There is also the question of certificate duration. We offer one year and two year certificates. I tend to do my certificates in one year increments, if that helps provide any guidance.
Common Name
The Common Name used in the certificate is the hostname for the website involved. The hostname the browser is trying to reach and the Common Name of the certificate have to match or browsers will toss a warning. If your site is web1.mydomain.com then you should make that your Common Name. What if you also use images.mydomain.com? Well, in that case you’re looking for either a wildcard certificate (which we do not offer) or setting up multiple certificates. If you choose wrongly in the setting of the Common Name there are potentially steps that can be taken to amend a certificate order or to revoke and re-issue with the correct Common Name. Those will be covered in a later article.
The socket rule
Because of the way the SSL protocol works at this time there is a limit of one certificate per socket. A socket is an IP address and port combination, such as 1.2.3.4:443. 1.2.3.4:444 would be a different socket. For applications like SMTP/POP3 or FTP this doesn’t particularly matter. It matters a great deal for HTTP because HTTP has for years had the concept of virtual hosting.
Virtual hosting is the method by which you can host 20, 30, 100 websites on one IP address. This works because modern browsers pass as part of their request a field called the host header. This field looks like “Host: web1.mydomain.com” and tells the web server which site you’re trying to hit among all the sites configured for whichever IP address to which you connect. In the case of HTTPS (HTTP over SSL) the web server has to select the SSL certificate to send to the client prior to seeing the host header and so for a given socket, there can be only one certificate.
The solution is that you assign each SSL enabled website to its own socket. You can do this by varying the IP address or varying the port. As a general rule you are going to want to do it by varying the IP address. If you change the port from 443/tcp then users will be required to include the port number in their URL like https://web1.mydomain.com:444 and this is going to create headaches for them and for you. Additional IP addresses can be acquired from the SoftLayer Sales department for a small monthly fee.
Now that you’ve considered some of the necessary decisions to be made the process of ordering an SSL certificate is broken into the following steps.
1. Generating the CSR
You generate the Certificate Signing Request by using software on the web server. For UNIX systems you will likely use the OpenSSL package. For Windows there is a wizard which is accessed from the Directory Security tab of the website properties in IIS Manager. If you are using a control panel, refer to specific information for that control panel.
In the process of generating the CSR you will create a private key. Do not lose, delete or share the private key. It is to be kept private on the web server. Some CSR generation utilities also give you the ability to create a passphrase for the private key. You probably don’t want to do this unless you plan to log on to the server any time the web server software is restarted. Also, do not apply a challenge phrase to the CSR.
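On a UNIX system, generating the key and CSR is typically a one-liner (file names are arbitrary; -nodes skips the passphrase discussed above):
# openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr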
2. Order the certificate
As with other things at SoftLayer you order certificates via the management portal. In the portal, go to Security > SSL Certificates to place an order. You’ll be walked through selecting the type and duration of certificate, submitting the text of the CSR, filling out some additional details and then confirming payment.
3. Install and test
Once the ordering and validating process is complete you receive an e-mail from the Certificate Authority which includes your certificate as well as any necessary Intermediate certificates. The method for installation of these will depend on the software you are using but the end result should be the same. You should, when done, be able to visit https://host.yourdomain.com and see your content while also seeing the SSL padlock that browsers use to denote an encrypted session. If you get a warning of some kind then there will be steps that need to be taken. Support and future KnowledgeLayer articles will be able to help with this.
How to create a self-signed SSL certificate: http://blogmee.info/index.php/how-to-create-a-self-signed-ssl-certificate/
Urchin Web Analytics Software
Urchin is a full featured website analysis tool designed to provide reporting on how visitors arrive at your site, which pages are visited, what they do while they are there and how often they return. Other valuable benefits include trending of online marketing campaigns, impact of website content and e-commerce initiatives.
System Requirements:
Urchin itself requires 700 Megabytes of free space, plus additional space over time for the database and log storage. Urchin recommends 10 Gigabytes of new storage per month for every one million page views.
A MySQL/PostGres database is required for Urchin to work.
It is strongly recommended that Urchin be run on a stand-alone server and not in a shared environment.
Currently Supported Operating Systems:
- Linux 2.4
- Linux 2.6 (RedHat/CentOS 4, 5)
- Debian
- FreeBSD 6.X i386
- Windows 2003
- Windows 2008
Improvements over Urchin 5:
- Improved translation in all languages
- Configuration database now uses MySQL and PostgreSQL
- Improved clustering capability
- Security enhancements to the Apache webserver
- Improved scheduler
- E-commerce and campaign tracking are included
- More robust log processing engine
- User interface is presented in Flash
- Cross-segmentation that allows you to view metrics based on referring source, keyword, country, city, user agent, and more.
Please feel free to open a support ticket if you run into problems with the installation or configuration of this product.
Urchin can be ordered through the SoftLayer Portal under Software -> Urchin.
Installing Urchin on Linux
Read more http://blogmee.info/index.php/urchin-web-analytics-software/
DoS: looking at open connections
RedHat: netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
BSD: netstat -na |awk '{print $5}' |cut -d "." -f1,2,3,4 |sort |uniq -c |sort -n
You can also check for connections by running the following command.
netstat -plan | grep :80 | awk '{print $4 }' | sort -n | uniq -c | sort
These are a few steps to take when you feel the server is under attack:
-------------------------------------------------------------------------------
Step 1: Check the load using the command "w".
Step 2: Check which service is utilizing maximum CPU by "nice top".
Step 3: Check which IP is making the most connections: netstat -anpl | grep :80 | awk '{print $5}' | cut -d':' -f1 | sort | uniq -c | sort -n
Step 4: Then block the IP using a firewall (APF or iptables: "apf -d <IP>")
-------------------------------------------------------------------------------
You can also implement security features in your server like:
1) Install apache modules like mod_dosevasive and mod_security in your server.
2) Configure APF and IPTABLES to reduce the DDOS
3) Basic server securing steps :
===============================
http://www.linuxdevcenter.com/pub/a/linux/2006/03/23/secure-your-server.html?page=1
===============================
4) Configure sysctl parameters in your server to drop attacks.
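For example, a few commonly used settings appended to /etc/sysctl.conf (a starting sketch, not a complete hardening guide):
net.ipv4.tcp_syncookies = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.rp_filter = 1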
On Windows, you can block the IP which is attacking your server using IPsec from the command prompt.
=========
>> netsh ipsec static add filterlist name=myfilterlist
>> netsh ipsec static add filter filterlist=myfilterlist srcaddr=a.b.c.d dstaddr=Me
>> netsh ipsec static add filteraction name=myaction action=block
>> netsh ipsec static add policy name=mypolicy assign=yes
>> netsh ipsec static add rule name=myrule policy=mypolicy filterlist=myfilterlist filteraction=myaction
Adding IPv6 to FreeBSD systems
In order to bind IPv6 on a FreeBSD 6.X or 7.X system that was deployed and imaged prior to having an IPv6 subnet under your account, you will need to add the following to your /etc/rc.conf file
This configuration assumes that em1 is the network interface you want to use. It also shows that 2607:f0d0:2001::5 is the address that you want assigned with a prefix length of /64, and that the default router is 2607:f0d0:2001::1.
#ipv6
ipv6_enable="YES"
ipv6_network_interfaces="em1"
ipv6_ifconfig_em1="2607:f0d0:2001::5 prefixlen 64"
ipv6_defaultrouter="2607:f0d0:2001::1"
Once this has been edited and saved, you can reboot your machine and it will bring up the interface.
If you want to have secondary IPv6 addresses on the FreeBSD machine you can add the following lines to your /etc/rc.conf under your ipv6 configuration section.
ipv6_ifconfig_em1_alias0="2607:f0d0:2001::7 prefixlen 64"
ipv6_ifconfig_em1_alias1="2607:f0d0:2001::6 prefixlen 64"
ipv6_ifconfig_em1_alias2="2607:f0d0:2001::8 prefixlen 64"
So the ipv6 section on this FreeBSD box looks like this:
fbsd-ipv6-test# cat /etc/rc.conf | grep ipv6
#ipv6
ipv6_enable="YES"
ipv6_network_interfaces="em1"
ipv6_ifconfig_em1="2607:f0d0:2001::5 prefixlen 64"
ipv6_ifconfig_em1_alias0="2607:f0d0:2001::7 prefixlen 64"
ipv6_ifconfig_em1_alias1="2607:f0d0:2001::6 prefixlen 64"
ipv6_ifconfig_em1_alias2="2607:f0d0:2001::8 prefixlen 64"
ipv6_defaultrouter="2607:f0d0:2001::1"
Read more http://blogmee.info/index.php/adding-ipv6-to-freebsd-systems/
Adding IPv6 to Ubuntu systems
Add the following to /etc/network/interfaces:
#IPV6 configuration
iface eth1 inet6 static
pre-up modprobe ipv6
address 2607:f0d0:2001:0000:0000:0000:0000:0010
netmask 64
gateway 2607:f0d0:2001:0000:0000:0000:0000:0001
The first line tells the system which interface to use IPv6 on.
The second line tells the system to load the module for IPv6.
The third line gives the IPv6 address.
The fourth line defines the netmask for the IPv6 subnet.
The fifth line defines the default gateway for the IPv6 subnet.
Read more http://blogmee.info/index.php/adding-ipv6-to-ubuntu-systems/
Adding secondary IPs to LINUX
On Redhat/Centos/Fedora
1. Determine what existing range files exist:
# cd /etc/sysconfig/network-scripts/
# ls ifcfg-eth1-range*
You will see at least one file, possibly several. Find the highest number following the "range" and add one to it. This will be the new range number.
For example, if you see ifcfg-eth1-range0, ifcfg-eth1-range1, and ifcfg-eth1-range2, your new range number will be "3".
2. Determine the next available interface number (clone number).
# ifconfig | grep eth1
You will see a list of interfaces that looks like this
eth1 Link encap:Ethernet HWaddr 00:08:74:A3:29:70
eth1:0 Link encap:Ethernet HWaddr 00:08:74:A3:29:70
eth1:1 Link encap:Ethernet HWaddr 00:08:74:A3:29:70
.
.
eth1:8 Link encap:Ethernet HWaddr 00:08:74:A3:29:70
Find the highest number after the "eth1:". Add one to it and this is your new clone number. In this case it would be 9.
3. Create a range file for the new range number. (for this example, we will use range3)
# vi ifcfg-eth1-range3
4. Write the following lines to the range file. (replace the dummy ip information with your desired ip range and the CLONENUM_START value with the one calculated above)
IPADDR_START='123.0.0.1'
IPADDR_END='123.0.0.10'
CLONENUM_START='9'
5. Write and quit the range file, and restart your network.
# /etc/init.d/network restart
6. Your new IPs should now be visible by running:
# ifconfig
On Debian/Ubuntu
Read more http://blogmee.info/index.php/adding-secondary-ips-to-redhatcentos/
Tuesday, July 12, 2011
Open Source Data Leak Prevention
In network security there are many challenges. In any business that deals with any sort of protected information (like healthcare) the challenges can be even greater.
One of the largest problems I see that is not being addressed adequately is hospitals and physicians sending personal health information in plain emails. It doesn't matter that HIPAA has specified for years that you don't do this. It doesn't matter that every IT and security manager in the business knows you don't do this. It doesn't even matter that the government has placed potentially large fines on businesses that violate this. They still do it.
Enter MyDLP. MyDLP is a "Data Leak Prevention" software that is open source and licensed under the GPLv3.
MyDLP is very easy to install on an existing Ubuntu server, and they also provide an appliance installation image and a virtual image for download. Their website claims you can be up and running in under 30 minutes and it really is pretty darn easy.
Out of the box today, MyDLP will allow you to find and quarantine documents and emails containing SSN's, credit card numbers and international bank numbers. The advanced version that is due to be released shortly adds the ability to do custom regular expression based filters among other things.
The software is still in heavy development and features are being added every week. The developer was kind enough to give me access to the advanced version in development ahead of time for testing. The standard version is simple enough to be deployed by the most novice of network administrators. The advanced version gives you full power to customize your filtering methods to your heart's desire, but the documentation is still a bit thin and the advanced UI is a bit complicated for the average user. Really though, if you are a network administrator who has to deal with these things on a daily basis, you should be able to understand the advanced interface pretty easily. In just a few hours I was able to integrate MyDLP into my email server and set up custom filters to keep our customers from mistakenly sending us personally identifiable health information. It can even look into attached files and filter spreadsheets, documents and zip file contents as well.
There's a Windows client you can install that integrates with the server to prevent users from moving any protected information onto removable devices or network shares. Unfortunately there's no Linux client as of yet. There are also features to integrate it with a web proxy to filter incoming and outbound web traffic.
All in all, despite its early development status, I'd have to say this free, open source application can certainly give any commercial DLP system a run for its money.
From blogmee.info
Simple Changes To Secure an Ubuntu Desktop
In Ubuntu Desktop, you can deploy custom Gnome settings that override the defaults by dropping an XML file at:
/etc/gconf/gconf.xml.mandatory/%gconf-tree.xml
I use Puppet to deploy these settings to all of my Linux desktops. If you're from the Windows world, this is like using group policy, but with much more granular control.
Here's a sample of a few things you should change:
- Disable autorun - yes, there ARE nasty things you can do to Linux with an autorun USB stick, despite the Linux fanboys who may say otherwise. I've seen it.
- Disable the User List at Logon - You should already know who you are before you go to log in
- Enforce a screensaver lock - Make the desktops automatically lock to screensaver when left alone
<?xml version="1.0"?>
<gconf>
  <dir name="apps">
    <dir name="nautilus">
      <dir name="preferences">
        <entry name="media_automount_open" mtime="1287339134" type="bool" value="false"/>
        <entry name="media_autorun_never" mtime="1287339134" type="bool" value="true"/>
      </dir>
    </dir>
    <dir name="gdm">
      <dir name="simple-greeter">
        <entry name="disable_user_list" mtime="1287339134" type="bool" value="true"/>
      </dir>
    </dir>
    <dir name="gnome-screensaver">
      <entry name="idle_delay" mtime="1253741251" type="int" value="5"/>
      <entry name="idle_activation_enabled" mtime="1253741234" type="bool" value="true"/>
      <entry name="lock_enabled" mtime="1253741201" type="bool" value="true"/>
    </dir>
  </dir>
</gconf>
Because these settings are "mandatory," the user can't override them, with one exception: the user will still be able to change the screensaver timeout. This appears to be a bug in Gnome or Gnome Screensaver. They can't disable the lock, but they can push it out to as far as two hours.
Other changes might include:
- disabling USB storage devices entirely
- installing an iptables based default firewall
- requiring SSH encryption keys when logging in remotely instead of passwords
- Encrypting home directories
... and a lot more. I think network admins typically think of the big things and miss little things like forcing the screensaver to lock when left alone.
Thunderbird 5 in Ubuntu
In case you are not aware, Thunderbird 5 is now released and available for download. This latest release contains many bug fixes and comes with extra features like a new add-ons manager, tab dragging and reordering, and an enhanced account manager. For Ubuntu users, you can either download the tar file, unzip it and run the executable, or use a PPA and install it via the Ubuntu Software Center. The latter method is preferable as it allows you to receive regular updates and integrates better with the system.
Installation
To install Thunderbird 5 via PPA, open a terminal and type the following:
$ sudo add-apt-repository ppa:mozillateam/thunderbird-stable
$ sudo apt-get update
$ sudo apt-get install thunderbird
Done. Go to “Applications -> Internet -> Thunderbird ” to run the application.
Integrate Thunderbird to the Messaging menu
For Thunderbird to appear in the Messaging menu, all you have to do is install the "Ubuntu Unity Messaging Menu" extension.
In Thunderbird, go to "Tools -> Add-ons". Search for "Messaging Menu". Install the first extension that appears on the screen.
Restart Thunderbird after the installation. The Thunderbird entry should appear in the messaging menu now.
Replacing Evolution
If you are using the calendar feature in Evolution and you want to migrate to Thunderbird, you might be disappointed to find that Thunderbird does not come with a calendar feature by default. You can, however, install one of several extensions to implement the calendar feature.
Lightning is the most popular (and most comprehensive) Calendar extension for Thunderbird. It is almost a “must-install” for every Thunderbird user.
Alternatively, if you are a Google Calendar user, you can also install the Google Calendar Tab extension that loads your Google Calendar in a new tab.
If you are using IMAP to check your email accounts, there is no need to do any email migration from Evolution. However, for those using the POP protocol, follow the instructions here to migrate all your settings from Evolution.
Lastly, enjoy your Thunderbird.
Note: Thunderbird could be the default email client for Ubuntu Oneiric. It would be good to familiarize yourself with it now.
Monday, July 11, 2011
Limit Bandwidth per vHost in Apache2 ( Ubuntu, Debian )
To avoid this, I started asking my friends to use a download manager to limit their download speed... but that's far too tricky for a lot of them...
This HowTo is somewhat Debian and Ubuntu specific, although the configuration of the module will work the same way on all *nix distributions. It will illustrate how to limit bandwidth on a vHost basis using the Bandwidth Mod by Ivan Barrera, written for the Summer of Code event. First of all, you will have to install the libapache2-mod-bw package on your system.
Here is a quick way to limit the bandwidth used by Apache :
Install & enable (on Ubuntu or Debian):
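A sketch of the install, enabling and a per-vHost limit (directive names follow the mod_bw documentation; the 100 KB/s figure is arbitrary):
# apt-get install libapache2-mod-bw
# a2enmod bw
<VirtualHost *:80>
ServerName www.example.com
BandWidthModule On
BandWidth all 102400
</VirtualHost>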
Read more http://blogmee.info/index.php/limit-bandwidth-per-vhost-in-apache2-ubuntu-debian/
How Do I Limit Connections Per Single IP?
*BSD PF Firewall Example - Limit Connections Per Single IP
Linux Netfilter (Iptables) Examples To Limit Connections
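As one concrete Linux illustration, an iptables connlimit rule that caps a single IP at 20 concurrent connections to port 80 (the threshold is arbitrary):
# iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT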
Read more http://blogmee.info/index.php/how-do-limit-connections-per-single-ip/
Limit Bandwidth Usage in Lighttpd
connection.kbytes-per-second:
limit the throughput for each single connection to the given limit in kbyte/s
default: 0 (no limit)
server.kbytes-per-second:
limit the throughput for all connections to the given limit in kbyte/s
if you want to specify a limit for a special virtual server use:
$HTTP["host"] == "www.example.org" {
server.kbytes-per-second = 128
}
which will override the default for this host.
default: 0 (no limit)
Additional Notes
Keep in mind that a limit below 32kb/s might actually limit the traffic to 32kb/s. This is caused by the size of the TCP send buffer.
More info
Create virtual host - subdomain in lighttpd
We'll configure Lighttpd for name-based virtual hosting.
Let us say your setup is as follows:
Public IP address: 111.212.8.8
Domain names: domain1.com and domain2.net
Default Document Root: /home/lighttpd/default/http
Default Document Root for domain1.com: /home/lighttpd/domain1.com/http
Default Document Root for domain2.net: /home/lighttpd/domain2.net/http
First, create required directories:
Create domain1.com virtual host configuration
Create domain2.net virtual host configuration
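For reference, each virtual host in lighttpd.conf is just a conditional on the Host header, along these lines:
$HTTP["host"] == "domain1.com" {
server.document-root = "/home/lighttpd/domain1.com/http"
}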
Read more http://blogmee.info/index.php/create-virtual-host-subdomain-in-lighttpd/
Tuesday, July 5, 2011
Create An Android App For Your (Any) Website
Have you been delaying creating a mobile app for your website because you don't have the budget? Or, as a smart mobile device user, are you frustrated that your favorite website doesn't come with a mobile app that lets you read and access its content anytime, anywhere?
That is about to change with AppYet. With AppYet, you no longer have to worry about the development fee for a mobile app, or feel frustrated about the lack of mobile support for your favorite sites, because you can now create your own mobile app in minutes.
AppYet is an online app that allows you to grab an RSS feed and turn it into an Android app. It is easy to use, even for people with zero coding knowledge. It is fast, too. All it takes is a few clicks here and there, and your app is delivered to your mailbox a short moment later.
Creating your own Android Apps
Few things to note about using AppYet
Read more http://blogmee.info/index.php/create-an-android-app-for-your-any-website/
The Beginner Guide to Writing Linux Shell Scripts
For starters – let’s clarify that headline. Linux has more than one possible shell, and scripting any of them is a subject that can easily fill a book. What we’re going to be doing is covering the basic elements of a bash script. If you don’t know what shell you’re using, it’s probably bash. The process will be familiar to anyone who’s worked with DOS’s bat files; it’s essentially the same concept. You just put a series of commands into a text file and run it. The difference comes from the fact that bash scripts can do a LOT more than batch files. In fact, bash scripting isn’t all that far from a full-fledged language like Python. Today we’ll be covering a few basics like input, output, arguments and variables.
Note: If we want to get really technical, bash is not a Linux-only shell. Much (though possibly not all) of the following would apply to any UNIX-type system, including Mac OSX and the BSDs.
Hello World
It’s tradition to begin a new “language” by creating a simple script to output the words “Hello World!”. That’s easy enough, just open your favorite text editor and enter the following:
#!/bin/bash
echo Hello World!
With only two lines, it couldn’t be a whole lot simpler, but that first line, #!/bin/bash, may not be immediately obvious. The first two characters (often called a hashbang) are a special signal. It tells Linux that this script should be run through the /bin/bash shell, as opposed to the C shell or Korn shell or anything else you might have installed. Without it, there’s no easy way for Linux to tell exactly what type of shell script this is. A Python script, for example, would likely start with something like #!/usr/bin/python.
After that is just the echo statement, which prints the words after it to the terminal (technically, to standard output).
Running Your Script
As is often the case with Linux, there are multiple ways to do this job. The most basic way would be to call bash manually and feed it the script file, as in
#Filename can be anything, .sh is a common practice for shell scripts.
bash myscript.sh
Clever readers may be thinking “But wait, didn’t we put that hashbang thing in so it would know to use bash? Why did I have to run bash manually?” and the answer is “You didn’t“. At least, you wouldn’t have if we had taken a moment to make the script executable on its own.
In the previous example, we launched bash and sent it the script. Now we’ll save ourselves some future time by making the script executable so we don’t need to run bash manually. That’s as easy as a single command.
# chmod +x myscript.sh
And now it can be run with the filename directly
# ./myscript.sh
Variables and Arguments
Variables in bash can be a little more confusing than some other scripting languages, partly because they sometimes need to be prefaced with a $ character and sometimes not – depending on what you’re doing. Take the following example.
PATH=$PATH:/home/josh/scripts
We refer to the same variable, PATH, two times. Once there’s no $, but the other time there is. There are a few ways that you can remember when a $ is appropriate, but this author uses a “talking” metaphor. If I’m talking TO the variable (such as assigning it a new value) I call it by the short name, in this case PATH. If I’m talking ABOUT a variable (such as getting its current value) it gets a more formal title ($PATH). The precise reasoning and inner workings of this design are beyond the scope of this guide, so just try to remember that you need to include a $ if you’re trying to fetch the information in a variable.
Now we’re going to use a variable in our script. Change the second line to look like the following:
#!/bin/bash
echo Hello $1!
Bash auto-assigns certain variables for you, including a few such as $1, $2, etc., which hold each of the arguments passed to the script. Variables can be reassigned and renamed any way you wish, so you could rewrite the previous script as
#!/bin/bash
firstname=$1
lastname=$2
echo Hello $firstname $lastname!
As you can see, there are no $ signs when assigning the value to the variable, but you do need them when pulling the info out.
Conditionals and Loops
No script could get very far without the ability to analyse or loop through data. The most common method of determining a course of action is to use the if statement. It works much like you’d expect – IF something THEN do stuff ELSE do something different. This example takes the string of characters that we stored in the variable firstname and compares it to some hardcoded text. If they match, it prints special output. Otherwise, it continues as normal.
#!/bin/bash
firstname=$1
lastname=$2
if [ "$firstname" == "Josh" ]
then
echo "What a great name"
else
echo Hello $firstname $lastname!
fi
Finally, the next core component is bash’s ability to loop over data. The normal looping mechanisms for bash are FOR, WHILE, and UNTIL. We’ll start with while, as it’s the simplest.
#!/bin/bash
counter=0
#While the counter is less than 50, keep looping
while [ $counter -lt 50 ]; do
echo $counter
let counter=counter+1
done
That example creates a counter variable, begins a while loop, and continues looping (and adding one to the counter) until it reaches the limit, in this case 50. Anything after the done statement will execute once the loop is complete.
UNTIL operates similarly, but as the reverse of WHILE. A while loop will continue as long as its expression is true (counter less than 50). The until loop takes the opposite approach, and would be written as :
until [ $counter -gt 50 ]; do
In this example, "while less than 50" and "until greater than 50" will have nearly identical results (the difference being that one will include the number 50 itself, and the other will not. Try it out for yourself to see which one, and why.)
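For completeness (the examples above stop at while and until), a bash for loop over the same range would be a sketch like this:
#!/bin/bash
for counter in {1..50}; do
echo $counter
done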
Conclusion
As stated above, it would take a lot more than a single Getting Started article to fully demonstrate the power of bash scripting. The pieces shown here can be seen as the core components of how bash operates, and should suffice to show you the basic principles behind shell scripting in Linux. If you really want to get into the guts and start making some great scripts, check out GNU’s official bash reference guide here. Happy scripting!
How To Share Files with Your Android Device, Without Wires
Using a USB cable to move files between your Android device and your computer is … well, it’s so early 2000s, isn’t it? With wifi, if not ubiquitous, then widespread, there’s no reason why you should need to plug your device into your computer to do anything.
As long as you have some wifi available and the right app, you can turn your Android device into a mini web server, which lets you share files between your device and your computer. Let’s take a look at how to do just that with an app called Dooblou Wifi File Explorer.
Getting Going
You can download and install Dooblou Wifi File Explorer from the Android Market. There are two versions: a free one, and a Pro version which costs just under $2 (USD). Using the free version, you can't download, copy, or delete files. This article will look at the Pro version.
Once you've installed the app, turn on wifi on your Android device. You can do this using the widget on a home screen. If you're using a wifi hotspot that you've visited before, then your device will automatically connect to it. If not, then manually connect to the hotspot by tapping Settings -> Wireless & Networks -> Wi-Fi Settings and then tapping the name of the hotspot to which you want to connect.
Read more http://blogmee.info/index.php/how-to-share-files-with-your-android-device-without-wires/
Monday, July 4, 2011
BixData - Network and systems monitor
BixData can monitor everything that your network and systems depend on. After a simple installation of BixServer and BixDesktop, you can monitor network devices, HTTP, Web Services, Mail Servers, File Systems and Applications. Be notified instantly when a service or device goes down, or escalate notifications based on downtime.
Manage FreeBSD, Linux, OS X, and Windows, all with a single software solution.
Reporting capabilities range from overview reports of resource and usage statistics to detailed reports that include the inventory of servers, hard disks and even serial numbers and firmware revisions. Uptime and availability graphs are automatically created for any service check. Numerous other graphs are also included for CPU, Memory, Network traffic and Disk Usage.
Through included data sources and plug-ins, BixData can collect data from any server on any platform and store it directly into a number of supported SQL servers. BixData uses an open specification for data storage. Open data exchange is very important for integrating with different environments and software systems, especially in IT. All data schemas and data storage in BixData are in XML format, which can easily be integrated into 3rd party applications or accessed directly from the underlying SQL database servers.
The Community Edition supports up to 30 servers.
Read more http://blogmee.info/index.php/bixdata-network-and-systems-monitor/
Virtual Machine Manager
The "Virtual Machine Manager" application (virt-manager, to use its short package name) is a desktop user interface for managing virtual machines. It presents a summary view of running domains along with their live performance & resource utilization statistics. The detailed view graphs performance & utilization over time. Wizards enable the creation of new domains, and configuration & adjustment of a domain's resource allocation & virtual hardware. An embedded VNC client viewer presents a full graphical console to the guest domain.
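If you want to try it on a Debian- or Ubuntu-based system, installation and launch typically look like the following (the package name is taken from those distributions' repositories; other distributions may package it differently):
# Install the Virtual Machine Manager package
sudo apt-get install virt-manager
# Launch the graphical manager; it connects to the local hypervisor by default
virt-manager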
Read more http://blogmee.info/index.php/virtual-machine-manager/
Server virtualization
Virtualization is the latest buzzword. You may wonder: computers are getting cheaper every day, so why should I care, and why should I use virtualization? Virtualization is a broad term that refers to the abstraction of computer resources such as:
- Platform Virtualization
- Resource Virtualization
- Storage Virtualization
- Network Virtualization
- Desktop Virtualization
This article describes why you need virtualization and lists commonly used FOSS and proprietary Linux virtualization software.
Why should I use virtualization?
Read more http://blogmee.info/index.php/server-virtualization/
FreeNAS 8.0 Simplifies Storage
The FreeNAS distribution is tailor-made for installation in a small office environment. It is an extremely low-resource network storage system that you can administer through a Web browser, but it supports high-end features like automatic backups, replication, LDAP or Active Directory authentication, and seamless file-sharing over NFS, CIFS, AFP, and even FTP and TFTP. The latest release — version 8.0 — is just a few weeks old, and it is the perfect time to take a look.
A Bird's-Eye View
For those new to FreeNAS, it is important to realize that the system is designed to be lean and mean by eliminating all other server functionality. That is, FreeNAS will give you high-end storage features, but it will not double as a Web server, authentication server, or any other piece of IT infrastructure. However, you can run FreeNAS on older or off-the-shelf PC hardware and attach far more storage per dollar than you would ever get from a commercial storage appliance. By switching off all of the unnecessary server and OS components, the FreeNAS team manages to get incredible speed out of the operating system and fit the entire image into a compact package: the latest release fits into 64 MB.
In fact, FreeNAS can run from non-volatile, compact flash storage, so you can save every byte of your hard disks for files. The core of the system is derived from the open source FreeBSD project, heavily customized with the NanoBSD embedded image creator. It can even be configured to run as a read-only system image, so that there is no danger of losing customizations in the event of a power loss.
The preferred filesystem for storage volumes is ZFS, a high-capacity filesystem with built-in support for snapshots, copy-on-write transactions, and logical volume management. It originated in Sun's OpenSolaris, but has since been ported to other operating systems. ZFS volumes can be fully managed from the Web admin interface, and FreeNAS can manage multiple file sharing protocols concurrently, for compatibility with Unix-like networks, Windows, and Mac OS X. For those not interested in ZFS, the older UFS filesystem is supported as well.
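Although FreeNAS drives ZFS entirely through its web interface, the operations map onto standard ZFS commands. Purely as an illustration (the pool name and FreeBSD device names here are hypothetical), a mirrored pool could be created at a shell like this:
# Create a mirrored pool named "tank" from two disks
zpool create tank mirror ada1 ada2
# Verify the pool's health and capacity
zpool status tank
zpool list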
Hardware support is a given; any type of drive will work, from parallel ATA up through FireWire and iSCSI, in heterogeneous combinations. Hardware RAID controllers are also supported, as is software RAID. The current list of RAID levels includes 0, 1, 5, JBOD, 5+0, 5+1, 0+1, 1+0, and even RAID-Z. Disk encryption is also supported, and monitoring is available via SNMP, email reporting, and remote logging.
Installation and Setup
Read more http://blogmee.info/index.php/freenas-8-0-simplifies-storage/
Create Virtual Hosts with Apache
Setting up virtual hosts might seem like a big challenge, but it's not. In fact, you can set up a virtual host with just a few edits to Apache's configuration and by setting up additional directories for the documents. For this how-to, I want to use an Apache installation on a Ubuntu server. Please be aware that the instructions may require modification if being followed on a non-Debian distribution because of the way that Apache is packaged. However, the Apache directives should be standard across distributions and should work even if Apache isn't running on Linux.
Creating the Directory Structure
Before the configurations can be tackled, the directory structure for the virtual site must be created. I am going to be working with Apache as installed on a Ubuntu server, so the Apache document root will be /var/www. The directory structure for the new Web site can be created anywhere: some create those directories in their home (~/) directory, some create them in /usr/local/apache, and others in various locations. For the sake of simplicity, I am going to illustrate setting up the virtual host in the document root of Apache (in Ubuntu that would be /var/www). By doing this, it will not be necessary to change ownership of the newly created directory or the parent directory housing the virtual host (since Apache must have access to the directories and files).
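For instance, creating the directory and a bare-bones virtual host might look like this (the site name example.com and the Debian/Ubuntu sites-available layout are illustrative assumptions, not taken from the article):
# Create the document root for the new virtual host
sudo mkdir -p /var/www/example.com
# Define a basic virtual host (file path follows the Debian/Ubuntu convention)
sudo tee /etc/apache2/sites-available/example.com >/dev/null <<'EOF'
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
</VirtualHost>
EOF
# Enable the site and reload Apache
sudo a2ensite example.com
sudo /etc/init.d/apache2 reload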
Read more http://blogmee.info/index.php/create-virtual-hosts-with-apache/