Saturday, August 13, 2011

Resource Management in Solaris 10


Solaris 10 resource management is a major step forward over what was available in Solaris 8 and 9. In Solaris 10, we can manage resources at the zone, project, or task level.
Projects are collections of tasks, which are collections of processes. A new task is started in a project when a new session is opened by a login, cron, newtask, setproject or su command. Each process belongs to only one task, and each task belongs to only one project.
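For example, newtask starts a command (or a new shell) as a task in a named project, and id -p reports the project of the current process; the project name "apache" below is illustrative:
$ newtask -p apache                      # start a new shell as a task in project "apache"
$ id -p                                  # show uid, gid, and the current project
$ ps -o pid,project,taskid,args          # ps can report project and task IDs per process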
When there is more than one policy in place for a particular object, the smallest container’s control is enforced first.
Projects are maintained via the /etc/project file. Changes to /etc/project take effect for new tasks started in the project; they do not affect tasks that are already running. (prctl and rctladm are used to make runtime changes.)
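For instance (project name illustrative), prctl can display or replace a control on a running project, and rctladm can enable syslog logging of violations. Values set with prctl do not persist across reboots; persistent settings still belong in /etc/project:
$ prctl -n project.max-lwps -i project apache             # display the current control value
# prctl -n project.max-lwps -v 300 -r -i project apache   # replace the value at runtime
# rctladm -e syslog project.max-lwps                      # log violations of this control via syslog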
The fields in an /etc/project entry are:
  • projname: Name of the project.
  • projid: Unique numerical project identifier less than UID_MAX (2147483647).
  • comment: Project description.
  • user-list: Comma-separated list of users.
  • group-list: Comma-separated list of groups.
  • attributes: Semicolon-separated list of name-value pairs, such as resource controls, in a name[=value] format.
After a default Solaris 10 installation, /etc/project contains the following:
system:0::::(default project for system processes and daemons)
user.root:1::::(processes owned by the root user)
noproject:2::::(IP Quality of Service)
default:3::::(default assigned to every otherwise unassigned user)
group.staff:10::::(default used for unassigned users in the “staff” group)
Parameters are set by adding them to the last field of the project entry:
projectname:101::::project.max-lwps=(privileged,200,deny)
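The same entry can be created, and later modified, with projadd and projmod rather than hand-editing /etc/project; the user name below is illustrative:
# projadd -p 101 -U webadmin -K "project.max-lwps=(privileged,200,deny)" projectname
# projmod -s -K "project.max-lwps=(privileged,250,deny)" projectname   # substitute a new attribute value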

The full article also covers:
  • Management Commands
  • Privilege Levels
  • IPC Resource Controls
  • Other Resource Controls
  • Command Examples
  • Default Project

Virtual Machines in Solaris 10


Zones are containers to segregate services so that they do not interfere with each other. One zone, the global zone, is the locus for system-wide administrative functions. Non-global zones are not able to interact with each other except through network interfaces. When using management commands that reference PIDs, only processes in the same zone will be visible from any non-global zone.
Zones requiring network connectivity have at least one dedicated IP address. Non-global zones cannot observe each other’s network traffic. Users in the global zone, however, are able to observe the functioning of processes in non-global zones. It is usually good practice to limit user access to the global zone to system administrators. Other processes and users should be assigned to a non-global zone.
Each zone is assigned a zone name and a unique numeric zone ID. The global zone always has the name “global” and ID “0.” A node name is also assigned to each zone, including global. The node names are independent of the zone names.
Each zone has a path to its root directory relative to the global zone’s root directory.
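A minimal zonecfg/zoneadm sequence illustrates these components; the zone name, zonepath, IP address, and interface name are all illustrative:
# zonecfg -z web01
zonecfg:web01> create
zonecfg:web01> set zonepath=/zones/web01
zonecfg:web01> set autoboot=true
zonecfg:web01> add net
zonecfg:web01:net> set address=192.168.1.50/24
zonecfg:web01:net> set physical=e1000g0
zonecfg:web01:net> end
zonecfg:web01> verify
zonecfg:web01> commit
zonecfg:web01> exit
# zoneadm -z web01 install
# zoneadm -z web01 boot
# zlogin -C web01            # attach to the zone console for initial system identification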
A non-global zone’s scheduling class is set to be the same as the system’s scheduling class. If a zone is assigned to a resource pool, its scheduling class can be controlled by controlling the pool’s scheduling class.
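For example (pool and zone names illustrative, and assuming a pool named webpool already exists in the static configuration), the pool's scheduler can be set to the fair share scheduler (FSS) and the zone bound to the pool; the binding takes effect at the next zone boot:
# pooladm -e                                                       # enable the resource pools facility
# poolcfg -c 'modify pool webpool (string pool.scheduler="FSS")'   # set the pool's scheduler
# pooladm -c                                                       # instantiate the configuration
# zonecfg -z web01 "set pool=webpool"                              # bind the zone to the pool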
Non-global zones can have their own zone administrators. Their authority is limited to their home zone.
The separation of the environments allows for better security, since the security for each zone is independent. Separation also allows for the installation of environments with distinct profiles on the same hardware.
The virtualization of the environment makes it easier to duplicate an environment on different physical servers.
ZFS is supported in Solaris 10 zones from the 6/06 release onward.

The full article also covers:
  • Zone Installation
  • Zone States
  • Zone Control Commands
  • Resource Management
  • Zone Components
  • Zonecfg Interactive Mode
  • Adding Resources
  • Zone Models



Service Management Facility – SMF


The Service Management Facility was introduced in Solaris 10 as the default way to manage most services, replacing the legacy SVR4 run-control scripts. The SMF framework has significant advantages over the legacy mechanism, primarily in terms of service monitoring, automatic restart, and integration with the Fault Management Facility.
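A few everyday SMF commands, using the ssh service as an example:
$ svcs -a                                  # list all services and their current states
$ svcs -x                                  # explain any services that are not running
# svcadm enable svc:/network/ssh:default   # enable a service persistently
# svcadm restart network/ssh               # restart a service (FMRIs may be abbreviated)
$ svcs -l network/ssh                      # show detailed status, including dependencies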

The full article also covers:
  • Basic Commands
  • Service Identifiers
  • SMF Service Starts
  • Maintenance
  • Scripts
  • SMF Profiles
  • Service Configuration Repository
  • Revert to a Snapshot
  • Boot Troubleshooting




You may also want to know about:

Solaris Fault Management


The Solaris Fault Management Facility is designed to be integrated with the Service Management Facility to provide a self-healing capability to Solaris 10 systems.
The fmd daemon is responsible for monitoring several aspects of system health.
The fmadm config command shows the current configuration for fmd.
The Fault Manager logs can be viewed with fmdump -v and fmdump -e -v.
fmadm faulty will list any devices flagged as faulty.
fmstat shows statistics gathered by fmd.
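Put together, a typical first look at a suspected fault might run the following; output is omitted here, and the UUID argument is whatever fmadm faulty or fmdump reports:
# fmadm faulty             # list resources the Fault Manager considers faulty
# fmdump                   # one-line summary of each fault diagnosis, with its UUID
# fmdump -v -u <uuid>      # detailed view of a single diagnosis
# fmdump -e -v             # raw error telemetry (the ereports behind the diagnoses)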

Fault Management

With Solaris 10, Sun has implemented a daemon, fmd, to track and react to faults. In addition to sending traditional syslog messages, the system sends binary telemetry events to fmd for correlation and analysis. Solaris 10 implements default fault management operations for several pieces of hardware in SPARC systems, including CPU, memory, and I/O bus events. Similar capabilities are being implemented for x64 systems.
Once the problem is defined, failing components may be offlined automatically without a system crash, or other corrective action may be taken by fmd. If a service dies as a result of the fault, the Service Management Facility (SMF) will attempt to restart it and any dependent processes.
The Fault Management Facility reports error messages in a well-defined and explicit format. Each error code is uniquely specified by a Universally Unique Identifier (UUID) that corresponds to a document on the Sun web site at http://www.sun.com/msg/.