Chuyện của sys

DevOps Blog

Installing OBIEE 10G on CentOS 5.x (July 7, 2015)

Introduction: a guide to installing OBIEE 10G on CentOS 5.x (x86).
Prerequisites:

Java SDK 1.5 – 1.7
Oracle Client 11g R2 (full)
Installation media: Oracle Business Intelligence (10.1.3) Media Pack for Linux x86

Procedure:
Create the installation user and directories:

groupadd oinstall
useradd obi -g oinstall
passwd obi
mkdir -p /usr/local/OracleBI
chown -R obi:oinstall /usr/local/OracleBI
chmod -R 775 /usr/local/OracleBI
mkdir -p /usr/local/OracleBIData
chmod -R 775 /usr/local/OracleBIData
chmod -R 775 /usr/local/OracleBI

Check that urandom and random exist in /dev/:

cd /dev/
ls *random

vi /etc/security/limits.conf

obi             soft    nofile  1024
obi             hard    nofile  65536
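After re-logging in as obi, you can verify that the new limits are in effect; a minimal check (the values shown depend on your system):

```shell
# Print the soft and hard open-file limits for the current shell.
# These should match the limits.conf entries after a fresh login.
ulimit -Sn
ulimit -Hn
```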

vi .bash_profile

# File Descriptor Limit
ulimit -n 10240
# Java Home
JAVA_HOME=/usr/java/jdk1.7.0_80
export JAVA_HOME
# Local Binary
PATH=$PATH:$HOME/bin
# OBI Setup Script
PATH=$PATH:/usr/local/OracleBI/setup
export PATH
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/client_1
export LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/client_1/lib:$LD_LIBRARY_PATH
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/u01/app/oracle/product/11.2.0/client_1/bin:$PATH
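After logging back in, a quick sanity check confirms the profile took effect (the paths are this article's examples; adjust them to your install):

```shell
# Print the key variables; "not set" means the profile was not sourced
# or a path is wrong.
echo "JAVA_HOME=${JAVA_HOME:-not set}"
echo "ORACLE_HOME=${ORACLE_HOME:-not set}"
echo "TNS_ADMIN=${TNS_ADMIN:-not set}"
```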

Verify the prerequisites with the UnixChk.sh script:

chmod a+x UnixChk.sh
./UnixChk.sh -b /usr/local/OracleBI

SUCCESS!! - This machine is configured for Oracle BI EE 10.1.3.3.2

Run as root:

chmod -R 777 /usr/java/jdk1.7.0_80

Run the installer in console mode:

cd biee_linux_x86_redhat_101342/Server/Oracle_Business_Intelligence/
./setup.sh -console

Enter the following values:

  • Installation Location: /usr/local/OracleBI
  • Data Location: /usr/local/OracleBIData
  • Installation Type: Basic
  • Set up type: Complete
  • JDK Location: /usr/java/jdk1.7.0_80
  • Administrator Credentials: oc4jadmin/oc4jadmin
  • Language: English

When the installation finishes you should see:

The InstallShield Wizard has successfully installed Oracle Business
Intelligence 10.1.3.4.1.

Start the server:

oc4j -start
run-sa.sh start
run-saw.sh start

Stop the server:

run-saw.sh stop
run-sa.sh stop
oc4j -shutdown -port 23791 -password oc4jadmin

Start the services at boot by adding these lines to /etc/rc.local:

vi /etc/rc.local
su obi -c 'oc4j -start'
su obi -c 'run-sa.sh start'
su obi -c 'run-saw.sh start'

Check that the server processes are running:

ps ax |grep sawserver
7271 ?        S      0:00 /bin/sh /usr/local/OracleBI/setup/sawserver.sh
7285 ?        Sl     0:46 /usr/local/OracleBI/web/bin/sawserver
20418 pts/0    S+     0:00 grep sawserver
ps ax |grep nqsserver
7059 ?        Sl     0:48 /usr/local/OracleBI/server/Bin/nqsserver -quiet
20474 pts/0    S+     0:00 grep nqsserver
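The same check can be wrapped in a small helper function, e.g. to run from cron (a sketch; pgrep is assumed to be available):

```shell
# Report whether each OBIEE process is running; pgrep -f matches the
# name anywhere in the command line, like the ps|grep checks above.
check() {
    if pgrep -f "$1" >/dev/null 2>&1; then
        echo "$1 running"
    else
        echo "$1 NOT running"
    fi
}
check nqsserver
check sawserver
```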

Check the log files:

tail -f /usr/local/OracleBI/server/Log/NQServer.log
tail -f /usr/local/OracleBIData/web/log/sawserver.out.log
tail -f /usr/local/OracleBIData/web/log/javahost.out.log

That completes the OBIEE 10G installation.
Log in at http://ip:9704/analytics/saw.dll?Dashboard
References:
http://gerardnico.com/wiki/dat/obiee/LINUX_installation
http://gerardnico.com/wiki/web_server/oc4j/memory

Installing the Full Oracle Client 11gR2 on Linux with a Response File from the Terminal (July 2, 2015)

Goal: install the full 11gR2 client package (instead of the Instant Client) on a CentOS 6.x 64-bit server.
Preparation:
– Download the 64-bit installer (linux.x64_11gR2_client.zip) from the Oracle web site.
– A CentOS 6.x 64-bit OS with the packages required by the Oracle client already installed.
Steps:
1. Install the following packages:

yum -y install binutils compat-libstdc++-33 compat-libstdc++-33.i686 ksh elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel gcc gcc-c++ libaio libaio.i686 libaio-devel libaio-devel.i686 libgcc libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 make sysstat unixODBC unixODBC-devel
2. Adjust the kernel parameters (not strictly required on the client side, but some minimum requirements must be met).
For example:
vi /etc/sysctl.conf
#net.bridge.bridge-nf-call-ip6tables = 0
#net.bridge.bridge-nf-call-iptables = 0
#net.bridge.bridge-nf-call-arptables = 0
net.ipv4.ip_local_port_range = 9000 65500
fs.file-max = 6815744
kernel.shmall = 10523004
kernel.shmmax = 6465333657
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_max=1048576
fs.aio-max-nr = 1048576
3. Run sysctl -p to apply the settings. Sample output:
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_local_port_range = 9000 65500
fs.file-max = 65536
kernel.shmall = 10523004
kernel.shmmax = 6465333657
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
4. Create the user and group for the installation
groupadd oinstall
groupadd dba
useradd -m -g oinstall -G dba oracle
echo "oracle:oracle" | chpasswd
id oracle
5. Create the installation directory
mkdir -p /opt/oracle
chown oracle:oinstall /opt/oracle/
6. Switch to the oracle user
su – oracle
7. Edit the response file used for the installation.
unzip linux.x64_11gR2_client.zip
The installation source directory is /home/oracle/client
cd response/
cp client_install.rsp client_install.rsp.orig
vi client_install.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_clientinstall_response_schema_v11_2_0
ORACLE_HOSTNAME=oracle.local.test
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/opt/oracle/OraInventory
SELECTED_LANGUAGES=en
ORACLE_HOME=/opt/oracle/product/11.2.0.1/client
oracle.install.client.installType=Administrator
8. Run the installation
./runInstaller -silent -responseFile ~/client/response/client_install.rsp
Add -debug to enable debug mode.
Sample Output
[WARNING] [INS-13014] Target environment do not meet some optional requirements.
CAUSE: Some of the optional prerequisites are not met. See logs for details. /opt/oracle/OraInventory/logs/installActions2015-07-02_12-13-40AM.log
ACTION: Identify the list of failed prerequisite checks from the log: /opt/oracle/OraInventory/logs/installActions2015-07-02_12-13-40AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The following configuration scripts need to be executed as the "root" user.
#!/bin/sh
#Root scripts to run
/opt/oracle/OraInventory/orainstRoot.sh
/opt/oracle/product/11.2.0.1/client/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts
4. Return to this window and hit "Enter" key to continue

Successfully Setup Software.
Some warnings appear because the system does not meet every requirement, but they can be ignored.
Follow the instructions and press Enter.
/opt/oracle/OraInventory/orainstRoot.sh
/opt/oracle/product/11.2.0.1/client/root.sh
The installation is now complete; you can verify it by checking the logs stored in /opt/oracle/OraInventory/logs.
9. Set the environment variables:
vi ~/.bash_profile
export TMP=/tmp
export TMPDIR=/tmp
export ORACLE_BASE=/opt/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.1/client
export PATH=$ORACLE_HOME/bin:$PATH
10. Test the installation with SQL*Plus
sqlplus
SQL*Plus: Release 11.2.0.1.0 Production on Thu Jul 2 19:58:21 2015
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
Enter user-name:
This completes installing the Oracle client on Linux using a response file, for cases where a GUI installation is not possible.

QAD Enterprise Applications: Administering Connection Manager

Connection Manager
Connection Manager is a tool that provides functions and views to help administrators manage the telnet connections used to run the QAD .NET UI. Connection Manager initially displays three menu options: Functions, Connections, and Users.
(screenshot: Connection Manager)

Functions
Close Connection Manager
Closes all active connections; all running processes are killed.
Restart Connection Manager
Shuts down and restarts Connection Manager.
Reset failed init count
Resets all failed connections.
Delete HTML cache
Clears stale screen data by deleting the cache.
Message of the day
Sets a short message displayed in the header of the main page each time a user logs in.
Update configuration settings
Displays the configuration settings for editing.
Monitor Connections
Select Connections to view the active sessions; each connection has one of the following states:
Initializing
The session is starting and has not yet connected successfully (shown in red).
Idle
The session is up and ready for a user to connect (shown in green).
Busy
The session is in use by a user.
Pause
Waiting for a response from the user, e.g. "press the spacebar to continue".
Processing
Executing a DB update statement while records are locked (shown in orange).
Force Disconnect
A disconnect forced by the admin.
Disconnected
Fully disconnected.
Monitor Users
Connection Manager's Users view lists the connected users.
User ID
Shows full details: Status, ID, Process ID, User ID, Device, User IP, Maximum Connections, Program, and User Connected Time.
Refresh
Refreshes the list.
Close
Closes the current user's connection when necessary (e.g. a hung session or failed process).


Configuring Load Balancing and a Reverse Proxy with HAProxy and keepalived for Web Servers on CentOS 6 (June 29, 2015)

Introduction:
HAProxy is a popular open-source TCP/HTTP load balancer and reverse proxy. Combined with keepalived, it provides an effective high-availability solution at a lower cost than other (hardware-based) load balancing solutions.
This article introduces some concepts related to proxies, reverse proxies, load balancing algorithms, HAProxy, and keepalived, then walks through configuring a high-availability setup using HAProxy with keepalived on CentOS 6.
Concepts:

  • Proxy

A proxy is an internet server that forwards and inspects traffic between clients and servers; it has a fixed IP address and port. How it works: every request from a client to a server first passes through the proxy, which checks the request and forwards it to the server if it is allowed; responses from the server are handled the same way.
– Forward proxy: the proxy we use every day; it sits between one client and all of the servers that client wants to reach.
– Reverse proxy: the opposite; it sits between one server and all of the clients that server serves, like a station with a checkpoint. Every request from a client to the server must pass through the reverse proxy, where it is inspected, filtered, and routed on to the server. Its advantage is centralized management: we can control every request clients send to the server we need to protect.

  • Load balancing techniques

Load balancing software is usually deployed as a proxy. The key techniques are: checking server state (health checks), choosing the best server for each request, and keeping user connections sticky.
– Health checks: a server can be checked with ping or some other method.
– Best-server selection is based on load balancing algorithms (round robin, etc.).
– Session persistence: keeping a user's requests on the same server for the whole session; all of a user's requests must go to the same server. If that server dies or is taken down for maintenance, there must be a mechanism to move the session to another server; cookies are the usual method.

  • Load balancing algorithms

– Round Robin (RR): rotates requests across servers; works well when the servers have similar capacity. Its weakness is that two consecutive requests from the same user may go to two different servers, so it is usually paired with a session-persistence method such as cookies.
– Weighted Round Robin (Ratio): like RR, but accounts for server capacity; within one cycle, a server that can handle twice the load of another receives twice as many requests from the load balancer.
– Least Connections (LC): each client request goes to the server with the fewest connections at that moment.
Many other algorithms exist as well.
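To illustrate weighted round robin, here is a toy sketch (hypothetical servers app1 with weight 2 and app2 with weight 1; HAProxy itself does this with the weight parameter on a server line):

```shell
# Toy weighted round robin: in every cycle of 3 requests,
# app1 (weight 2) receives two and app2 (weight 1) receives one.
for request in 1 2 3 4 5 6; do
    case $(( request % 3 )) in
        0) echo "request $request -> app2" ;;
        *) echo "request $request -> app1" ;;
    esac
done
```

Over six requests, app1 serves four and app2 serves two, a 2:1 ratio.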

  • HAProxy and related concepts

HAProxy runs as a standalone application; all you need to do is install it on a server and write a configuration file specifying the HAProxy address and the addresses of the backend servers. Each server gets its own settings for address, cookie, weight, and so on. The configuration file also sets session timeouts, the maximum number of connections, cookies, the load balancing algorithm, ACLs, etc.

Key terms:
– ACL (Access Control List): tests a condition and performs an action, such as selecting a server or blocking a request, based on the result. ACLs let you route traffic dynamically based on factors such as pattern matching and the number of connections to a backend.
Example: acl url_static path_beg -i /static /images /javascript /stylesheets
URLs whose path begins with /static, /images, etc. are matched by url_static.
– Backend: the set of servers that receive the forwarded requests. A backend is defined in a backend section of the configuration file with, at minimum, the load balancing algorithm to use and a list of servers with their ports.
A backend can contain one or more servers.
– Frontend: defines how requests are routed to backends. A frontend is defined in a frontend section of the configuration file and consists of: a set of IP addresses and ports, ACLs, and use_backend rules (which backend to use depending on whether an ACL matches, plus a default_backend rule to handle all remaining cases).
Example:
frontend main *:80 # all IPs, port 80
acl url_static path_beg -i /static /images /javascript /stylesheets   # path begins with these prefixes
acl url_static path_end -i .jpg .gif .png .css .js     # path ends with these suffixes
use_backend static if url_static # use the static backend if the ACL matches
default_backend app   # otherwise fall back to the app backend

  • keepalived

keepalived is a routing daemon written in C that provides a simple, robust facility for load balancing and high availability. Put simply, keepalived provides IP failover for a cluster: it lets two load balancers running alongside it operate as an active/backup pair.

Topology
(diagram: network topology)
Where:
GW1: CentOS 6 server, acting as Master
GW2: CentOS 6 server, acting as Backup
APP1: Ubuntu 14.04 server running Apache
APP2: Ubuntu 14.04 server running Apache
(installing and configuring the Apache web server is not covered in this article)
eth0 is attached to the public network, eth1 to the private network.
HAProxy is installed on both GW1 and GW2, listens on 192.168.1.200, and acts as the reverse proxy.
192.168.1.200 is a virtual IP created on GW1 and GW2 through keepalived.
IP list:

192.168.1.128 gw1
192.168.1.130 gw2
172.16.1.132 app1
172.16.1.133 app2
192.168.1.200 vip
Install haproxy and keepalived on both GW1 and GW2 with yum:
yum -y install haproxy keepalived
Configure keepalived in /etc/keepalived/keepalived.conf:

! Configuration File for keepalived
#--------------------------------------------------------------------------#
# Global definitions for email notification settings                       #
#--------------------------------------------------------------------------#
global_defs {
   notification_email {
   }
   notification_email_from [email protected]
   smtp_server localhost
   smtp_connect_timeout 30
}
#--------------------------------------------------------------------------#
# Script that checks the haproxy process                                   #
#--------------------------------------------------------------------------#
vrrp_script chk_haproxy {
  script "killall -0 haproxy" # check the haproxy process
  interval 2 # check every 2 seconds
  weight 2 # add 2 points if OK
}
#--------------------------------------------------------------------------#
# VRRP                                                                     #
#--------------------------------------------------------------------------#
vrrp_instance VI_1 {
    state MASTER            # MASTER on gw1, BACKUP on gw2
    interface eth0          # interface for the virtual IP
    virtual_router_id 51
    priority 101            # priority: 101 on MASTER, 100 on BACKUP
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.200       # virtual IP
    }
    track_script {
        chk_haproxy         # run the check script
    }

}

Start keepalived:
service keepalived start
Then verify that keepalived is running on both GW1 and GW2 (*)
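The chk_haproxy script above works because signal 0 delivers nothing: killall -0 (like kill -0) only tests whether the named process exists. A minimal demonstration of the idea:

```shell
# Probe a process with signal 0: exit status 0 means it exists,
# nonzero means it is gone. No signal is actually delivered.
sleep 30 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is alive"
fi
kill "$pid"    # clean up the throwaway process
```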
Configure haproxy in /etc/haproxy/haproxy.cfg:
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log         127.0.0.1 local2
chroot      /var/lib/haproxy
pidfile     /var/run/haproxy.pid
maxconn     4000
user        haproxy
group       haproxy
daemon

# turn on stats unix socket
stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode                    http
log                     global
option                  httplog
option                  dontlognull
option http-server-close
option forwardfor       except 127.0.0.0/8
option                  redispatch
retries                 3
timeout http-request    10s
timeout queue           1m
timeout connect         10s
timeout client          1m
timeout server          1m
timeout http-keep-alive 10s
timeout check           10s
maxconn                 3000    # Number of maximum connections

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend  main *:80
acl url_static       path_beg       -i /static /images /javascript /stylesheets
acl url_static       path_end       -i .jpg .gif .png .css .js
#    use_backend static          if url_static
default_backend             app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
balance     roundrobin
server      static 127.0.0.1:4331 check
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
balance     roundrobin       # use the RR algorithm
cookie webpool prefix        # use cookie rewriting for session persistence
server  app1 172.16.1.132:80 cookie webpool-app1 check
server  app2 172.16.1.133:80 cookie webpool-app2 check

listen stats 192.168.1.200:1991   # listen on the virtual IP
stats uri /haproxy            # URL for viewing haproxy status
stats enable                  # enable the web status page
stats auth root:123456        # login user and password
Enable haproxy and keepalived at boot:
chkconfig haproxy on
chkconfig keepalived on
service haproxy start
That completes the haproxy installation and configuration; you can now run your checks and start using the system.
References:
http://www.haproxy.org/download/1.4/doc/configuration.txt
https://aaronwalrath.wordpress.com/2011/06/28/configure-haproxy-and-keepalived-for-load-balancing-and-reverse-proxy-on-red-hatscientificcentos-linux-56/

Rsync – Remote Sync (June 9, 2015)

Overview:
Rsync (remote sync) is a tool for synchronizing data (files and directories) between remote servers or locally, commonly used in *nix environments in place of the ordinary cp command.

Key features:

  • Rsync synchronizes two locations by copying data in blocks (the default) rather than whole files (a separate option enables whole-file copies), which greatly improves speed for large files and directories.
  • Rsync can encrypt data in transit using ssh, so the transfer is secure.
  • Rsync saves bandwidth by compressing data at the source and decompressing at the destination, though this costs some extra time.
  • A notable feature of rsync is that it can preserve all directory and file attributes (with the -a flag): recursive mode, symbolic links, permissions, timestamps, owner, and group.
  • Rsync does not require super-user privileges.
  • (See man rsync for more.)

Installation:
Installation is straightforward on every distribution.
Usage:
General syntax:

rsync -options SRC DEST

  • Local synchronization:

rsync -a ~/backup-Code/ ~/tmp/

  • Push to a remote server:

rsync -a /home/nhanpt5/backup-Code/ [email protected]:~/Codebk/Push

  • Pull from a remote server:

rsync -a [email protected]:~/Codebk/Push /home/nhanpt5/backup-Code/Pull
Some flags to know:
-v: verbose output
-z: compress data in transit (compress at the source, decompress at the destination), saving bandwidth when synchronizing a large amount of data
-d: synchronize the directory tree only, not files
-P: show progress during the transfer
-a: archive mode; preserves all directory and file attributes
Some options to know:
--delete: delete files and directories at the destination
Use --delete when you want the two locations fully synchronized: files and folders that exist at the destination but not on the source server are removed, so the destination is an exact replica of the source.
-u: do not overwrite data at the destination
Use -u when you only want to synchronize files and folders that do not yet exist at the destination. Files that already exist (already synchronized) are not synchronized again.
--existing: do not create new files at the destination
Use --existing to sync only files that already exist at the destination (update-style), without creating new ones.
-W: copy whole files
If you have plenty of bandwidth and a fast CPU, use -W to copy whole files. It is faster because no checksums are computed at the source or destination.
Many other flags exist; see man rsync.
Putting it to use:
Rsync has no built-in scheduler for automatic backups, so it is usually combined with another tool. Example: use crontab together with rsync and ssh to push data to a server every day. We proceed as follows:

Scenario:
Back up the ~/Code directory daily (on the local server) and push it to the code server (192.168.1.128) under ~/Codebk.
Set up ssh key authentication so that logging in to 192.168.1.128 requires no password.
1. Use the backupfile script to compress the directory: vi ~/backup-Code/backupfile
#!/bin/bash
date=$(date +"%m-%d-%Y")
filename=$date-backup.zip
source_folder=/home/nhanpt5/Code
dest_folder=/home/nhanpt5/backup-Code
# add folder to zip file
zip -r $dest_folder/$filename $source_folder > /dev/null
Schedule it to run at 3 a.m. daily with cron.
2. Use the tranfer script to push the backup file to the server and delete the local copy: vi ~/backup-Code/tranfer
#!/bin/bash
date=$(date +"%m-%d-%Y")
filename=$date-backup.zip
dest_folder=/home/nhanpt5/backup-Code
#transfer the zip file to the remote server with rsync
rsync -av $dest_folder/$filename  [email protected]:~/Codebk/
#delete zip file
rm -f $dest_folder/$filename
Schedule it to run at 3:30 a.m. daily with cron.
crontab -l shows:
0 3 * * * ~/backup-Code/backupfile
30 3 * * * ~/backup-Code/tranfer >~/backup-Code/bk.log 2>&1

Nguyễn Thắng playlist (March 18, 2015)

A playlist of great music!!!


Sharepoint 2013 Configuration Wizard fails at step 2 (March 6, 2015)

I hit this error while installing SharePoint Foundation 2013 (standalone) on Windows Server 2008 R2.

Run this command from PowerShell:
(screenshot: the PowerShell command)
Wait for it to finish:

Re-run the Configuration Wizard; it completes successfully.
Thanks to this post: http://www.adventuresinsharepoint.co.uk/index.php/2013/02/02/configuration-failed-failed-to-create-the-configuration-database/


Fixing "NTLDR is missing" on Windows Server (January 17, 2015)

I ran into this error after extending the virtual disk of a Windows Server 2003 R2 machine on ESXi 5.1. It happens when the ntldr file is missing because the boot loader on drive C is damaged, 🙁 probably because I extended it in too much of a hurry 😀
Use the Windows Server 2003 Recovery Console to fix it; luckily I still had the ISO to mount as a CD.
Press ESC to boot from the CD when the BIOS prompt appears.
When the Windows Setup screen comes up, press R to enter the Recovery Console.
On this screen, read carefully: Enter exits rather than confirms; type "1" to select C:\Windows.
Type "map" to list the system drives; here F:\ is the CD drive.
Copy the two boot files from the CD to C:\
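The Recovery Console session looks roughly like this (a sketch: F: is the CD drive from the map output, and the two boot files, ntldr and ntdetect.com, live in the i386 folder of the install CD):

```
C:\WINDOWS> copy F:\i386\ntldr C:\
C:\WINDOWS> copy F:\i386\ntdetect.com C:\
C:\WINDOWS> exit
```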
Then type "exit" to quit.
Reboot the machine after ejecting the CD.
It boots normally again.
-.- an unlucky day.


Progress RDBMS Performance Tuning Tips December 3, 2014

Introduction

According to Adrian Cockroft of Sun, “Performance management is the measurement, analysis, optimization, and procurement of computing resources in order to provide an agreed level of service to the organization and its end users”.
It is a proactive and iterative process. This guide presents a few tips to help you achieve good Progress database performance on multi-user shared-memory systems, such as Unix, VMS, or Windows NT. It does not address application design, database design, network or operating system tuning.
The various suggestions are grouped into several topics listed below.

General Topics – This section contains remarks that are not specific to the Progress RDBMS. They apply to most computer systems, regardless of the particular software that you are using.
Tools – Various tools that you can use to help analyze system performance and to make adjustments.
Disks – Making the most of your disks.
Block Sizes – Benefits of setting block sizes for the database and transaction logs (bi and ai).
Shared Memory – How to cope with shared memory issues.
Processes – Describes the various processes that are part of the database and how to use them.
Buffers – Tuning various buffer sizes
Networking – Options for better client-server network performance.
Miscellaneous Topics – Various things that do not fall into any of the other categories listed above.

General Topics

Understand your business goals

What is the purpose of your computer system? You must understand what business goals the system is intended to achieve in order to understand whether it does so well or poorly.

Understand your workload

You must identify the work that your system is doing and how it relates to your business requirements. This is essential so that you will be able to compare performance over time and so that you can tell whether changes in performance are the result of changes made in the tuning process or are the result of changes in the workload. Solving problems in the future will be much easier if you know what has changed. If the workload is increasing, understanding how it is increasing may allow you to predict when you will have to add new resources to the system.

Define the problem

Before you start, you have to know what problem you’re trying to solve. Without a clearly understood and measurable goal, you will waste a lot of time. Define the problem as precisely as you can. For example the statements “Response time for entering new orders is 30 seconds during the first 3 days of the month. It should be no more than 2 seconds.” define a problem and a goal. The statement “My application does not perform well.” is completely meaningless.
Once you have decided what your goal is, make measurements to see where you stand. Then you know how far you have to go. You will also know when you have reached the goal and can stop working.
The two most commonly used measures of a computer system’s performance are throughput and response time. Throughput is the number of operations performed per unit of time and is often expressed in transactions entered per hour, orders processed per day, and the like. Response time is the time from the user’s initiation of an operation until he or she can continue.
Measure the application’s performance as well as the overall system’s performance. Measurements of cpu utilization, disk i/o rates, etc. may show symptoms of problems and provide clues to tell you where to investigate further, but application performance, whether or not the users are satisfied, and whether or not you are meeting your business goals is what matters.

Understand what is “normal”

Use your monitoring tools to collect data when you do not have a problem. Then when you do have a problem, collect new data. If you are familiar with your system’s normal behaviour, you will be able to spot problem symptoms more easily. You can compare your new data to the “normal” data to see what has changed.

If it ain’t broke, don’t fix it

If your system is fine and working and everyone can get their work done on time, don’t fix anything. Leave it alone. Just collect data.

Change one thing at a time

You must be systematic about any changes you make. Often, changing one thing affects another. For example, if you increase the size of the database buffer pool to reduce disk i/o, memory consumption will increase and may cause increased disk activity due to paging. Balancing the use of all your resources should be one of your goals.
Always measure the effect of every change you make to see if you are making things better or worse. If you change two things and one makes things better but the other makes them worse, you won’t know which one.

Learn how to fish

The tips given here are guidelines. They are rules of thumb that are the result of past experience. They will work in many but not all situations. Applications and systems are so complex and different from one another that it is impossible for everyone to configure their system exactly the same way. Each situation will require analysis and thought. To get (and to keep getting) good performance from a large system with many users and complex applications takes time and effort.

Check your system

Make sure that you don’t have a problem unrelated to Progress. Tuning Progress usually can’t compensate for insufficient or unbalanced machine resources. There are three main areas to examine:

  • CPU Utilization: Less than 90% is good. That shows you aren’t trying to use more than you have, and that you have at least some to spare.
  • Disk I/O: A good disk can perform about 60 random or 150 sequential transfers per second. If you have disks whose utilization is consistently above 60%, they are overloaded. Disk usage ought to be balanced so that each disk gets roughly the same amount of activity.
  • Memory: If you don’t have enough memory, the system will be paging (writing data from memory to disk and reading it back again later). This allows the system to create the illusion that it has more memory than there actually is, which can be a good thing. However paging requires additional disk i/o. This takes time and takes away disk capacity for doing useful work. It is difficult to generalize about how much paging is too much because systems vary so much, but more than 5 hard page faults per second is probably something that should be investigated.
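On Linux, a first look at these three areas might use something like the following (vmstat ships with procps; iostat, from the sysstat package, gives per-disk figures if installed):

```shell
# Two vmstat samples one second apart; the second line shows current
# activity. Watch si/so (pages swapped in/out) and id (idle CPU %).
vmstat 1 2
# Disk space per filesystem; use iostat -x for per-disk utilization.
df -h
```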

Depending on your system’s configuration, you may also need to look at other areas, such as network devices and terminal controllers. NFS mounted file systems are sometimes a source of trouble that is overlooked. Consult your system’s documentation to see if it offers any useful advice. If you have a UNIX system, the “man” pages probably won’t help you much. Some other useful sources of information are:

  • “Unix System V Performance Management”, Phyllis Eve Bergman and Sally Browning ed., published by Prentice Hall. isbn: 0-13-016429-1
  • “Sun Performance and Tuning” by Adrian Cockcroft, published by Sunsoft Press. isbn: 013-149642-3
  • “AIX Version 3.2 and 4.1 Performance Tuning Guide”, published by IBM. Order No: SC23-2365-03
  • “System Performance Tuning” by Mike Loukides, published by O’Reilly and Associates, Inc. isbn: 0-937175-60-9
  • “Guide to Performance Management”, a VMS manual, published by Digital.

“Sun Performance and Tuning” is excellent and very useful even if you have some other kind of system. Mr. Cockcroft is an excellent writer who knows how to explain complex topics clearly.

Keep in touch with your OS vendor

Most operating system vendors publish performance related documentation. See what yours has to offer. IBM, HP, and Sun also publish performance tuning and analysis documents on their web sites. Go to the sites and search for “performance”.
Most operating system vendors provide patches to correct operating system problems. Sometimes these patches will be for problems that should be corrected on your system. There is a good chance that your vendor makes patches available on its web site.

Look at your applications

Tuning the system or the database won’t help you much if you have a poorly designed or poorly written application. Look at the application source if you can. Make sure it uses the indexes you have, that you don’t have unneeded indexes, that transactions are as short as possible, that you are not sorting unnecessarily, that you use no-undo variables where possible, and so on. This is a large topic in its own right and is not addressed here.

Tools

Unix Tools

Some useful tools that are commonly available on Unix systems are:

  • cp – copy a file
  • df – shows available space on filesystems
  • du – shows disk usage by directories and files
  • fuser – identify processes using files or file structures
  • glance – HP-UX system and process activity monitoring tool
  • iostat – report i/o statistics
  • ipcs – show ipc (shared-memory, semaphore, and msg queue) status
  • last – show last login time and date for user or tty
  • lsattr – AIX, shows the attributes of devices
  • lslv – displays information about a logical volume or the logical volume allocations of a physical volume
  • mpstat – show multi-processor statistics
  • netstat – show network status and report statistics
  • nfsstat – show Network File System (NFS) status and report statistics
  • no – displays or sets network options
  • ping – send ICMP Echo request packets to a network host
  • ps – report process status
  • pstat – print system facts
  • sadp – disk access profiler
  • sar – system activity reporter
  • showmount – show all remote mounts
  • spray – send a stream of packets to a network host and report transfer rate
  • time, timex – time the execution of a command
  • top – display information about the top cpu consumer processes
  • trace – trace system calls and signals
  • traceroute – print the route packets take to a network host
  • truss – trace system calls and signals
  • vmstat – report virtual memory, paging, and disk statistics
  • w – who is logged in and what they are doing
  • who – who is logged in on the system
  • whodo – who is logged in and what they are doing

Not every Unix system has all of the tools listed above. Check your system’s documentation to see what you do have on your system.

Windows NT Tools

The following tools are available for Windows NT systems.

  • Performance Monitor – a graphical tool for performance measurement. Includes charting, alerting, and reporting functions.
  • Event Viewer – a tool for monitoring the Windows NT event log
  • Quick Slice – shows active processes and threads with percentage of cpu utilization
  • Process Viewer – shows detailed information about active processes
  • SMS – Windows NT System Management Server

Progress Tools

The following tools are provided by Progress:

  • Promon – a database monitoring/activity reporting tool
  • Proutil dbanalyse – reports space usage, fragmentation, etc on tables and indexes
  • The 4GL Profiler – reports which procedures are called, how often, and how long they take
  • The 4GL – use it to instrument your application
  • Virtual System Tables – Database manager activity, usage and status data from 4GL or SQL

Disks

Use multiple disks

Use the multi-volume feature to put your database on multiple disks. Many small disk drives are better than one or a few large ones. The reason is that the operating system can transfer data to and from several disks simultaneously. The more drives you have, the higher the total transfer rate can be.
Don’t put anything else (including swap files) on the disks that have the database.
Put the bi file on the fastest disk. Avoid putting most other files on the disk that has the bi file. If you can’t dedicate disks to the database and bi files, try to balance things so that all of the disks have approximately the same amount of activity.

Balance disk usage

To make the most of your disk subsystems, you shouldn’t make one of them work harder than the others. If one disk is overloaded and others are idle, the overloaded disk can be a severe bottleneck that limits performance.
Arrange files and database extents so that the disk activity is approximately equal on all the disks (to within roughly 10%). Consider all sources of disk i/o activity, not just the database. Some other sources of disk activity unrelated to the database itself include:

  • The operating system does swapping and paging
  • Your application may read and write files
  • The application’s r-code is read into memory from files
  • The 4gl interpreter creates temporary files
  • 4gl temporary tables often overflow to disk
  • Sorting query results uses temporary files during the sort
  • Other applications that do disk i/o

High-speed disks rotate at 7200 rpm and allow 80 or more random access transfers per second. Slow disks will allow about 30 random accesses per second.
The various system monitoring tools (for example, sar -d) will report disk utilization in percent. These numbers are based on the proportion of time that one or more processes are waiting for an i/o operation to complete.
A rule of thumb I use for characterizing disk load is given by the table below.

0 to 25 % Low (underutilized)
25 to 40 % Moderate
40 to 60 % Heavy
Above 60 % Overloaded
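The rule-of-thumb table above is easy to apply programmatically when scanning sar output; a minimal sketch (the thresholds are from the table, the function name is mine):

```python
def disk_load_category(utilization_pct):
    """Classify disk utilization (percent busy, e.g. from 'sar -d')
    using the rule-of-thumb thresholds in the table above."""
    if utilization_pct < 25:
        return "low (underutilized)"
    elif utilization_pct < 40:
        return "moderate"
    elif utilization_pct <= 60:
        return "heavy"
    else:
        return "overloaded"

print(disk_load_category(72))  # overloaded
```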

In addition to utilization, you should examine the average wait time. This is the average amount of time a process had to wait for a disk i/o operation. If you see average waiting times larger than 50 milliseconds, this may indicate that the disk is overloaded even though it may not appear so from a utilization point of view.
But remember: disk activity should be balanced. Make them all work equally hard. Having an overloaded disk and an idle disk is a waste of money.

Two disks are better than one

The storage capacity of disk drives has increased dramatically since 1995 and will continue to do so for the next few years. You can already buy drives with 20 gigabyte storage capacity.
To maximize performance, you are better off with more smaller disks than one or a few large ones. For example, it is better to have four 1 GB disk drives than one 4 GB drive. This is because you can perform only one read or write operation at a time with one disk, but with four you can do four at the same time. The aggregate throughput of four drives is thus much higher than that of a single drive, even though the capacity is the same.
Disks are relatively cheap now compared to the days when a 5 MB drive cost $10,000. But they still cost money.

Two disk controllers are better than one

A SCSI channel can address up to seven disks or other devices. But you may not want to put that many on one channel if you are interested in the best possible performance.
A “fast and wide” SCSI-2 channel has a theoretical maximum transfer rate of 20 megabytes per second. A single disk drive can provide a sustainable data rate of roughly 4 megabytes per second. This implies that you should have no more than 4 drives per fast and wide SCSI-2 controller.
Standard SCSI channels can sustain a transfer rate of approximately 5 megabytes per second. This is one fourth of fast and wide SCSI-2.
In general, several controllers with disk drives distributed evenly over them will give better performance than a single controller.

Use disk striping

If your operating system supports disk striping, consider using it. Striping allows you to spread one or more files across several disks in a uniform manner. This can improve performance by balancing the activity on all your disks so that they are accessed approximately the same amount.

Use raw disks – maybe

A raw disk or raw partition is a contiguous section of disk space that does not have a filesystem on it. You can place any or all fixed length extents of a Progress database on raw disks. Using raw disks might improve performance by up to 20% in some circumstances, but they have many disadvantages. Some of them are:

  • Defining, keeping track of, and reconfiguring databases that use raw partitions is harder than for databases that use files.
  • Raw partitions are fixed size. You can’t change their size when you want to.
  • You usually can’t use the same tool to back up raw partitions as you use for filesystems. Instead, you must make a backup of the disk the partition is on, define new partitions and filesystems, and restore your backups.
  • The operating system does not know what is stored on a raw partition. It looks the same if it is empty or if it has a database extent on it or if it has five database extents on it.
  • Since raw partitions are accessed without using the system’s buffer pool, you will have to increase the size of the Progress buffer pool and decrease the size of the Unix buffer pool to get an equivalent amount of database page buffering. This could affect other applications’ performance.

See “Progress In the Raw” for more information.

Avoid IDE disk controllers

IDE disk controllers were designed for inexpensive single-user personal computers running some bogus software called DOS (Dog Operating System). These disk controllers transfer data from the controller to memory one byte at a time. This is maximally ungood. But it is why they are so cheap. The manufacturers should pay people to take them.
Unless you are a dog, don’t use them. If you have them in your system, perform the following steps immediately.

  • make an act of contrition,
  • make backups,
  • turn off the computer system,
  • open the case,
  • remove the thing,
  • throw it out the window,
  • go to the store and buy a reputable brand of SCSI disk controller,
  • don’t forget to buy new disks that work with SCSI controllers.

Avoid RAID 5 Configurations

In RAID 5 disk configurations, data are striped across several disks along with “parity” data. The parity data is distributed across the drives in such a way that a data block and its parity information are always written to different devices. This technique allows reconstruction of all data that was present on a drive that has failed. RAID 5 systems seem attractive because they are resilient to a single disk failure but cost much less than a fully mirrored configuration.
Read performance can be quite good, but write performance will be terrible. This is because the parity data must be updated whenever a block is written. In the worst case, writing a single database block requires four i/o operations. The following operations are performed internally by the RAID 5 system:

  • Read the old data group
  • Read the old parity data
  • Merge the new database block into the old data group
  • Compute new parity data
  • Write the new data
  • Write the new parity data

“But since the data and parity are on separate drives, they can be read in parallel” you say. Yes that is true. But TANSTAAFL (There Ain’t No Such Thing As A Free Lunch). Reading two disks at the same time uses up half your disk bandwidth.
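To see where the worst-case figure comes from, count the physical transfers behind each logical write in the list above: two reads (old data, old parity) plus two writes (new data, new parity); the merge and parity computation cost no i/o. A sketch of that accounting (names are mine):

```python
def raid5_physical_io(logical_writes):
    """Worst-case physical i/o operations a RAID 5 array performs
    for a given number of logical block writes: 2 reads (old data,
    old parity) + 2 writes (new data, new parity) per logical write."""
    reads = 2 * logical_writes
    writes = 2 * logical_writes
    return reads + writes

# 100 database block writes become 400 physical transfers:
print(raid5_physical_io(100))  # 400
```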
For more information about RAID configurations, see Raiders of the Lost Disk.

Block Sizes

Progress allows you to control the size of the database, before-image log, and after-image log files. You should increase all of them from their default values. For best performance, Progress block sizes should be the same size or a multiple of the operating system’s block size.

Set the database block size

The default block size for the database is 1024 bytes on most systems. You can specify another size when you create a database. By setting it to 8192 bytes (8 kilobytes), you can improve database i/o performance significantly. This is because with larger block sizes, you get more bang for your buck – more data are transferred in each i/o operation. Also, because index compression works at the index block level, indexes will compress better with larger block sizes. Writing (or reading) 8 kilobytes takes very nearly the same amount of time as it does to write 1 kilobyte. You set the database block size while creating a database with the command

prostrct create mydb -blocksize 8192

Don’t forget to adjust the value of the buffer pool size (startup parameter -B) to account for the larger buffers. -B is specified as the number of buffers.
What is the best block size? It depends. It depends on your application and your data. In general, larger block sizes are probably better than small ones, but if you have many small records, you may end up using more disk space because no more than 64 records can be stored in an 8 k block and no more than 32 in the smaller sizes.
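The small-record caveat can be quantified. With at most 64 records in an 8 K block and 32 in the smaller sizes, the record-count cap rather than the space can become the limit. A rough sketch under those assumptions (function and parameter names are mine; real Progress storage adds per-record and per-block overhead not modeled here):

```python
import math

def blocks_needed(record_count, block_size, avg_record_bytes):
    """Estimate blocks needed, honoring both the space available
    and the per-block record limit (64 for 8 K blocks, 32 otherwise)."""
    record_cap = 64 if block_size >= 8192 else 32
    by_space = block_size // avg_record_bytes
    per_block = min(record_cap, by_space)
    return math.ceil(record_count / per_block)

# 1,000,000 records of 50 bytes: in an 8 K block the 64-record cap,
# not space, is the limit, so 8 K blocks use *more* disk here.
print(blocks_needed(1_000_000, 8192, 50) * 8192)  # 128000000 bytes
print(blocks_needed(1_000_000, 1024, 50) * 1024)  # 51200000 bytes
```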

Set the before-image log block size

The default for the before-image log’s block size is the same as the database block size. Unless you are using 8 kilobyte database blocks, you should change the before-image log’s block size to 8 kilobytes. On most systems, this will give the highest I/O throughput. On some, 16 kilobytes will give slightly better throughput, but the difference is usually small enough that it doesn’t really matter.
Set the before-image block size to 8 kilobytes. You do this by specifying the -biblocksize option while truncating the bi file. e.g.

proutil mydb -C truncate bi -biblocksize 8

Set the before-image log cluster size

Space in the before-image (bi) file is allocated in units called “clusters”. Whenever Progress fills a bi cluster, it performs an operation called a “checkpoint” to synchronize the disk-resident copy of the database with what is in memory. This is done to limit the amount of work required during crash recovery or restart and also to allow bi clusters to be reused when the data they contain is no longer needed.
Set the before-image cluster size to at least 1024 kilobytes. You do this by specifying the -bi option while truncating the bi file. e.g.

proutil mydb -C truncate bi -bi 1024

Note that when the bi file is initialized after you have truncated it, it will be expanded to 4 clusters. Make sure you have enough free disk space.
The benefit of increasing the cluster size is that page writers will have enough time to do the necessary i/o in the background. But you only need to make the clusters large enough so that the page writers can work effectively.
Disadvantages of increasing the cluster size are that restart and crash recovery will take longer, and when the bi file has to be expanded, it is expanded in larger chunks.
If you don’t use page writers, increasing the cluster size can cause long checkpoint completion times (2 minutes or more), especially if the buffer pool is large. These are observable as periods when no database update activity, transaction starts, or transaction ends can occur.
It is not unreasonable to set the cluster size to 1024 kilobytes or more, but sizes larger than 8192 kilobytes are probably overkill for most installations.
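Since a truncated bi file expands to 4 clusters when the database is next opened, the free-space requirement is simple arithmetic (a sketch; the 4-cluster figure is from the text above):

```python
def initial_bi_bytes(cluster_kb):
    """Minimum bi file size after truncation: 4 clusters,
    where -bi gives the cluster size in kilobytes."""
    return 4 * cluster_kb * 1024

# With -bi 1024 the bi file starts at 4 MB:
print(initial_bi_bytes(1024))  # 4194304
```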

Set the after-image log block size

If you are using after-image journalling, change the after-image log’s block size. The default after-image log’s block size is the same as the database block size. Unless you are using 8 kilobyte database blocks, this is too small. Set the after-image log block size to 8 kilobytes. You do this with the command:

rfutil mydb -C truncate ai -aiblocksize 8192

Shared Memory

Use spinlocks

On multi-processor systems, you can use spinlocks to improve internal resource sharing among database processes. All shared resources must be locked while they are being used, typically for periods on the order of a few microseconds. Spinlocks are essentially loops that retry continuously when an attempt to lock a shared resource fails. After some number of retries, the process will go to sleep for a short time. This is termed a “latch timeout”. The number of retries before sleeping is controlled by the -spin parameter.
Tuning -spin essentially means increasing its value until the number of latch timeouts no longer decreases. Increasing -spin will also cause an increase in cpu consumption, so you have to stop increasing it when cpu consumption gets above 90%.
Start by setting -spin to about 5000. This should be a good starting point. On systems with only a few cpus (2 or 3), you may find that cpu utilization becomes excessive (over 90%). If so, try smaller values. If cpu utilization is less than 90%, you can increase -spin. Try 10000 or 15000. You can adjust -spin from promon while the database is running.
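Conceptually, the spin-then-nap behavior looks like the loop below. This is a simplified illustration of the idea, not Progress internals; all names are mine:

```python
import time

def acquire_with_spin(try_lock, spin_limit, nap_seconds=0.01):
    """Retry a non-blocking lock attempt up to 'spin_limit' times
    (the -spin idea); each nap corresponds to a latch timeout.
    Returns the number of timeouts taken before acquiring."""
    timeouts = 0
    while True:
        for _ in range(spin_limit):
            if try_lock():
                return timeouts
        timeouts += 1          # latch timeout: give up the cpu briefly
        time.sleep(nap_seconds)

# A lock that succeeds on the 5th attempt needs no nap with a
# generous spin limit:
attempts = iter([False, False, False, False, True])
print(acquire_with_spin(lambda: next(attempts), 5000))  # 0
```

This also shows why raising -spin trades cpu for fewer naps: more retries burn cycles, but each avoided nap saves a context switch.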

Processes

Progress provides several types of background processes that improve performance. You should use them. Remember to increase the number of users startup parameter (-n) to account for background processes.

Use page writers

Asynchronous page writers (apw’s) are background processes whose job is to write updated database blocks to disk as needed so that servers do not have to take the time to do these writes. This gives them more time to do useful work on behalf of clients. Among their virtues are:

  • Checkpoints take less time because there are fewer modified pages and the page writers help with the checkpoint operation.
  • A supply of unmodified buffers is available for servers to read database pages from disk. They don’t have to write dirty pages first.
  • The lru chain does not become clogged with dirty pages at the oldest end so search time is reduced.

Page writers are self-tuning. Although there are parameters that affect their operation, you should not use them. The default values have been shown to be correct. The choice of how many apw’s to start is the only thing you have to worry about, and you choose that by starting with a small number (1 or 2). Then let the system run for a while and look at the Checkpoint display in promon. It shows what happened during the last 8 checkpoints.
See if the number of buffers flushed (the rightmost column) is consistently 0 or close to zero. If it is, you have enough apw’s and they are keeping up with the load. If you see 1 digit numbers, you are close to the edge. If you see higher numbers, then start another apw to see if it is enough.
The buffers flushed column indicates if any buffers were NOT written in the background during the asynchronous checkpoint. When a checkpoint ends (which happens at the same time that a cluster fills), any buffers left on the checkpoint queue must be written immediately. NO database changes can occur until those writes have been completed. This is because there is no space to write additional bi notes until the next cluster is opened.
This can only happen if the apw’s cannot do all the scheduled writes. There are 4 major causes:

  • You are not using apw’s in the first place.
  • The bi cluster size is too small. Checkpoints might occur so close together that the apw’s don’t have enough time to do their work.
  • The disk subsystem can’t sustain the required i/o rate. For example, a database stored on a single disk is likely to suffer from this problem.
  • You don’t have enough apw’s running to perform the required writes.

Don’t use page writers for read-only databases. They have no modified pages.

Use the before-image log writer

The before-image log writer (biw) is a background process that writes filled before-image buffers to disk. Always use the before-image writer (biw). There are no tuning parameters for it. Unlike page writers, you only need one before-image writer.

Use the after-image log writer

The after-image log writer (aiw) is a background process that writes filled after-image buffers to disk. If you are using after-image journalling, use the after-image writer. There are no tuning parameters for it. Unlike page writers, you only need one after-image writer.

Use the watchdog

Every process that connects to the database must make use of various shared resources in order to operate. Access to shared resources is regulated by a system of locks. When a process accesses shared data, it first locks them to gain exclusive access and releases the lock when the operation is done. If a process should be killed, it will not be able to release the locks it holds. No other process will be able to access the locked resource, but the lock holder cannot release the lock.
The watchdog’s job is to deal with such cases. It is a background process that periodically checks to see if another process has died or disappeared without disconnecting from the database. If it finds such a situation, it will assume the identity of the lost process, undo its current transaction if one exists, and release all its locked resources. While this almost always works successfully, it can fail on rare occasions if the missing process left the locked resources in an inconsistent state.

Run all Progress processes at the same priority

All Progress processes should have the same system scheduling priority. This is so because they share database resources. If a low priority process should lock a shared resource, higher priority processes will have to wait to access the resource. But the low priority process may not be able to finish using it and release it because the system will not schedule it due to its low priority.

Buffers

Tune the database buffer pool

The purpose of the database buffer pool is to cache soon-to-be-needed database pages (blocks) in (shared) memory to avoid disk i/o. The -B parameter determines the number of blocks that are kept in memory. When a Progress process wants to access a database block, it looks in the buffer pool to see if the block is there. If it is, then a disk read has been avoided and time saved. Progress uses the “least recently used” (lru) algorithm to predict the future to decide which blocks to keep in memory.
The default value for -B is 8 times the number of users (-n). If the buffer pool hit rate is below 90%, increase -B. If the hit rate is above 95%, you probably have a large enough buffer pool, unless the number of database reads is high.
The optimum value is a function of the application, database size, number and speed of the disks and controllers and other factors. The default is probably wrong for everyone.
As a rule of thumb, a decent disk can sustain up to 50 random i/o operations per second or 100 sequential i/o operations. Some disks are faster, some slower. Regardless of the hit rate, if the total number of database reads and writes per second approaches 30 times the number of disks the database is stored on, increasing the size of the buffer pool can help.
As you increase -B, make sure that you don’t cause paging or swapping due to the increased shared memory area size. The buffer pool is by far the largest data structure used by the database manager. Don’t forget that you are specifying the number of buffers. The amount of memory required by the buffer pool is approximately (130 + database block size in bytes) * number of buffers.
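It is worth checking the memory formula above before raising -B; a minimal sketch using the stated approximation:

```python
def buffer_pool_bytes(num_buffers, block_size):
    """Approximate shared memory used by the buffer pool:
    (130 + database block size in bytes) per buffer."""
    return (130 + block_size) * num_buffers

# 20,000 buffers of 8 KB blocks need roughly 159 MB:
mb = buffer_pool_bytes(20_000, 8192) / (1024 * 1024)
print(round(mb))  # 159
```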

Set the number of before-image log buffers

Set -bibufs to 15. If promon shows more than 5 % bi buffer waits, try setting -bibufs to 30. Values higher than 30 are not going to make any difference and will only waste memory.

Set the number of after-image log buffers.

Set -aibufs to twice the number of before-image log buffers if you are using after imaging.

Networking

Use TCP/IP, Avoid all others.

Do not use the spx/ipx protocols. ipx is very inefficient for client-server communications. The maximum message size is 512 bytes. Messages longer than that must be split up and sent as several messages. Receipt of each message fragment must be acknowledged before the next one can be sent. This is very ungood. The tcp/ip protocol is much better and gives much better performance.
“But what about (insert favorite protocol name here) ?” you say. Well, perhaps there are some good reasons why you would use it. But this article is about performance. Use tcp/ip. The other protocols are all dead anyway.

Increase the network message buffer size

The -Mm parameter determines the size of the message buffers Progress uses for sending and receiving messages in client-server configurations. Using tcp/ip, it is much more efficient to send one 1,000 byte long message than it is to send ten 100 byte long messages.
The default value of the message buffer size is 1024 bytes. You should increase it to at least 4096. Use a value that is a multiple of 4096. This will allow the server to send much more data per network message when it can. When there are less data than a full buffer, shorter messages will be transmitted. For example, if the buffer size is 16384, the server can send any size message up to 16384 bytes without dividing into several fragments.
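The saving from a larger -Mm is simple fragmentation arithmetic (a sketch; -Mm itself must of course be set on the server startup command line):

```python
import math

def messages_needed(payload_bytes, buffer_size):
    """Number of network messages required to carry a payload when
    each message holds at most 'buffer_size' bytes (the -Mm value)."""
    return math.ceil(payload_bytes / buffer_size)

# A 16,000-byte result set: 16 messages at the 1024-byte default,
# but a single message with -Mm 16384.
print(messages_needed(16_000, 1024))   # 16
print(messages_needed(16_000, 16384))  # 1
```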

Use traceroute

The traceroute utility is a great help in determining how far tcp/ip messages have to travel to reach their destination and how long it takes for them to get there. traceroute (sometimes called tracert) is a public domain or shareware utility and is available for all operating systems. Use it to find out if the path between client and server is longer than you expected. You may find your messages going through 4 routers you didn’t know about.

Miscellaneous Topics

Keep records

“Good judgement comes from experience. Experience comes from bad judgement.”
“Experience is what you get when you don’t get what you want.”
“To predict the future, you must know the past.”

You should keep records of what you do, both what works and what doesn’t. Next year when the same problem occurs, you might not be able to remember what you did to solve it.
You can spot trends and make forecasts when you collect data over a long enough time. For example, as you add users, you can probably tell when you will have to add more memory to your system.
When you get promoted, the next person who gets your job won’t have to start over.

Use the -q option

Normally, Progress searches PROPATH directories when looking for a procedure to make sure that a newer version of the file will be used if one exists. This is desirable during development.
The -q option tells Progress to search PROPATH directories only on the first use of a procedure. After that if the procedure still resides in memory or in the local session-compiled file, Progress uses that version rather than searching the directories again. This reduces the overhead for finding a procedure.

Be willing to experiment

I know: You have a business to run, your system has to be up 36 hours a day, you’re busy, everybody else is busy. There are a million reasons for not experimenting. But…”Nothing ventured, nothing gained.”

Get expert help if you need it

If you don’t know how to solve a problem, find someone who does. There are many sources of assistance, including consultants who earn their living by helping Progress customers, Progress Software’s own consulting services, books, newsgroups, and so on. Among them are:

  • Mr. John Campbell has published several useful and interesting Progress books. Among them are:
        “High Performance Coding: A Guide to Efficient Reports and Programs”
        “Making Good Progress”
        “Work Smarter, Not Harder”

    All of the above are available from:

      White Star Software, PO Box 51623, Palo Alto, CA 94303.

    Mr. Campbell’s telephone number is 415-857-0686. He can also be reached via e-mail at [email protected] and on the web at www.wss.com.

  • Mr. Dan Foreman’s “Progress Performance Tuning Guide” is an excellent reference. Mr. Foreman can be reached by telephone at 770-449-9696 and via e-mail at [email protected] and on the web at www.usiatl.com.
  • RTFM: The Progress manuals
      “System Administration Guide”, “System Administration Reference”, and “Database Design Guide” will be useful.
  • The Internet offers a wealth of information.
  • You can also get a wealth of information at the Progress User Conferences. Most conferences offer one or more sessions related to performance tuning. Members of the database development team are always at the conference to speak with customers and answer questions. Copies of the proceedings for past conferences can be obtained from Progress Software Corp., but not all issues are available.
  • A Performance Tuning Workshop is usually offered during or immediately after the annual user conferences. According to customers who have participated, it is well worth the extra money and time.
  • Progress Consulting Services can be reached at 617-280-4290.
  • Progress Education Services can be reached at 800-477-6473, ext. 4452.

http://www.fast4gl.com/downloads/monographs/tuning/tuning.html#misc

No Comments on Progress RDBMS Performance Tuning Tips
Categories: Older

Notes on P2V-converting Windows Server 2003 with VMware vCenter Converter October 12, 2014

“A day on the road brings a basketful of wisdom” – the old proverb really is true!

Task: convert a physical 32-bit Windows Server 2003 machine into a virtual machine. Everything was going well: after turning off the antivirus and firewall on the target machine and supplying the IP, the administrator user, and the password, the waiting began.
The job started at 8:30. The target server held roughly ~300 GB and the application was still running live, so progress was not as fast as hoped; it was almost 12:00 before the process reached 50%.
Around 3:30, the progress bar hit 98%. Just as the job seemed about to finish, VMware suddenly reported the following error:
unable to create ‘\\.\xxx\$Reconfig$’
I didn’t manage to grab a screenshot @@
Meanwhile, the target server’s event log was flooded with errors....
unable to create \\.\vstor2-mntapi-shared.xxxx
With no idea how to fix it, there was nothing left to do but google~~ almost 8 hours of converting for this result.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1030145
Root cause: the error occurs when the server’s boot.ini contains the /3GB switch.
To open boot.ini:

Edit the Boot.ini File

To view and edit the Boot.ini file:

  1. Right-click My Computer, and then click Properties.
    -or-

    Click Start, click Run, type sysdm.cpl, and then click OK.

  2. On the Advanced tab, click Settings under Startup and Recovery.
  3. Under System Startup, click Edit.

Remove the /3GB option, restart the server, and run the conversion again from the beginning.
The task finally completed at 10:41 that night.
=.= this write-up reads like crap 😀
