Chuyện của sys

DevOps Blog

Rsync – Remote sync June 9, 2015

A brief introduction:
Rsync (Remote sync) is a tool for synchronizing data (files and directories) between remote servers or locally. It is commonly used in *nix environments in place of the ordinary cp command.

Key characteristics:

  • Rsync synchronizes two locations by copying data in blocks (the default) rather than whole files (a separate option supports whole-file copies), so speed improves considerably for large files and directories.
  • Rsync can encrypt data in transit by running over ssh, so the transfer is secure.
  • Rsync can save bandwidth by compressing data at the source and decompressing at the destination, although this adds a noticeable amount of time.
  • A notable feature of rsync is that it can preserve all directory and file attributes (using the -a flag): recursive mode, symbolic links, permissions, timestamps, owner and group.
  • Rsync does not require super-user privileges.
  • (See man rsync for more.)

Installation:
Installation is straightforward on all major distributions.
Usage:
General syntax:

rsync -options SRC DEST

  • Synchronize locally:

rsync -a ~/backup-Code/ ~/tmp/

  • Push to a remote server:

rsync -a /home/nhanpt5/backup-Code/ [email protected]:~/Codebk/Push

  • Pull from a remote server:

rsync -a [email protected]:~/Codebk/Push /home/nhanpt5/backup-Code/Pull
Flags worth knowing:
-v: verbose output
-z: compress data on the wire, i.e. compress at the source and decompress at the destination, which saves bandwidth when synchronizing a large amount of data
-d: synchronize the directory tree only, not the files
-P: show progress while the data is being synchronized
-a: archive mode, preserves all directory and file attributes
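These flags are typically combined. For example, the following command (a sketch reusing the server and paths from the examples above) pushes the backup directory while preserving attributes, compressing in transit, and showing progress:

rsync -avzP /home/nhanpt5/backup-Code/ [email protected]:~/Codebk/Push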
Options worth knowing:
--delete: delete files and directories at the destination
Use --delete when you want a complete mirror of the source: files and folders that exist at the destination but not on the source server are removed, so the destination becomes an exact replica of the source.
-u: do not overwrite data at the destination
Use -u when you only want to synchronize files and folders that do not yet exist on the destination server. Files that are already there (already synchronized) are skipped.
--existing: do not create new files at the destination
Use --existing when you only want to sync files that already exist at the destination (an update of sorts) without creating any new ones.
-W: copy whole files
If you have plenty of bandwidth and CPU, use this option to copy whole files instead of deltas. It is faster because no checksums are computed at the source or the destination.
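As a concrete example, a full mirror that also removes files no longer present on the source could look like this (a sketch based on the push example above):

rsync -avz --delete /home/nhanpt5/backup-Code/ [email protected]:~/Codebk/Push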
There are many more parameters; see man rsync.
Putting it to use:
Rsync has no built-in scheduler for automatic backups, so it is usually combined with another tool. For example, crontab combined with rsync and ssh can push data to a server every day. Here is how:

Scenario:
Back up the ~/Code directory (on the local server) every day and send it to the code server (192.168.1.128) under ~/Codebk.
Set up ssh authentication with a private key so that logging in to 192.168.1.128 requires no password.
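Key-based login can be set up roughly as follows (a sketch: accept the defaults for ssh-keygen and enter the remote password once for ssh-copy-id):

ssh-keygen -t rsa
ssh-copy-id [email protected]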
1. Use a backupfile script to compress the directory: vi ~/backup-Code/backupfile
#!/bin/bash
date=$(date +"%m-%d-%Y")
filename=$date-backup.zip
source_folder=/home/nhanpt5/Code
dest_folder=/home/nhanpt5/backup-Code
# add folder to zip file
zip -r $dest_folder/$filename $source_folder > /dev/null
Schedule it to run at 3 a.m. every day with cron.
2. Use a tranfer script to push the backup file to the server and delete the local copy: vi ~/backup-Code/tranfer
#!/bin/bash
date=$(date +"%m-%d-%Y")
filename=$date-backup.zip
dest_folder=/home/nhanpt5/backup-Code
# transfer the zip file to the remote server with rsync
rsync -av $dest_folder/$filename [email protected]:~/Codebk/
# delete the local zip file
rm -f $dest_folder/$filename
Schedule it to run at 3:30 a.m. every day with cron.
Output of crontab -l:
0 3 * * * ~/backup-Code/backupfile
30 3 * * * ~/backup-Code/tranfer >~/backup-Code/bk.log 2>&1

Nguyễn Thắng playlist March 18, 2015

A great playlist!!!


Sharepoint 2013 Configuration Wizard fails at step 2 March 6, 2015

I ran into this error while installing SharePoint Foundation 2013 standalone on Windows Server 2008 R2.

Use PowerShell to run this command:
[screenshot: sharepoing3]
Wait for it to finish:

Run the configuration wizard again and it completes fine.
Thanks to this article: http://www.adventuresinsharepoint.co.uk/index.php/2013/02/02/configuration-failed-failed-to-create-the-configuration-database/


Fixing "NTLDR is missing" on Windows Server January 17, 2015

I hit this error after extending the virtual disk of a Windows Server 2003 R2 machine on ESXi 5.1. It happens because the ntldr file is missing, the boot loader on drive C having been damaged 🙁 probably because I extended the disk too hastily 😀
I used the Windows Server 2003 Recovery Console to fix it; luckily I still had the ISO to mount as a CD.
Press ESC when the BIOS screen appears to boot from the CD.
When the Windows Setup screen comes up, press R to enter Recovery mode.
Read the next screen carefully: Enter exits rather than confirms; type "1" to select the C:\Windows installation.
Then type "map" to list the drives on the system; here F:\ is the drive holding the CD.
Copy the two boot files from the CD to C:\.
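In the Recovery Console the copy might look like this (a sketch assuming the CD is mounted at F:\ as in the map output above, and that the two files are ntldr and ntdetect.com):

copy F:\i386\ntldr C:\
copy F:\i386\ntdetect.com C:\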
Then type "exit" to leave the console.
Reboot the machine after ejecting the CD.
And the server boots normally again.
-.- an unlucky day.


Progress RDBMS Performance Tuning Tips December 3, 2014

Introduction

According to Adrian Cockroft of Sun, “Performance management is the measurement, analysis, optimization, and procurement of computing resources in order to provide an agreed level of service to the organization and its end users”.
It is a proactive and iterative process. This guide presents a few tips to help you achieve good Progress database performance on multi-user shared-memory systems, such as Unix, VMS, or Windows NT. It does not address application design, database design, network or operating system tuning.
The various suggestions are grouped into several topics listed below.

General Topics – This section contains remarks that are not specific to the Progress RDBMS. They apply to most computer systems, regardless of the particular software that you are using.
Tools – Various tools that you can use to help analyze system performance and to make adjustments.
Disks – Making the most of your disks.
Block Sizes – Benefits of setting block sizes for the database and transaction logs (bi and ai).
Shared Memory – How to cope with shared memory issues
Processes – Describes the various processes that are part of the database and how to use them
Buffers – Tuning various buffer sizes
Networking – Options for better client-server network performance.
Miscellaneous Topics – Various things that do not fall into any of the other categories listed above.

General Topics

Understand your business goals

What is the purpose of your computer system? You must understand what business goals the system is intended to achieve in order to understand whether it does so well or poorly.

Understand your workload

You must identify the work that your system is doing and how it relates to your business requirements. This is essential so that you will be able to compare performance over time and so that you can tell whether changes in performance are the result of changes made in the tuning process or are the result of changes in the workload. Solving problems in the future will be much easier if you know what has changed. If the workload is increasing, understanding how it is increasing may allow you to predict when you will have to add new resources to the system.

Define the problem

Before you start, you have to know what problem you’re trying to solve. Without a clearly understood and measurable goal, you will waste a lot of time. Define the problem as precisely as you can. For example, the statements “Response time for entering new orders is 30 seconds during the first 3 days of the month. It should be no more than 2 seconds.” define a problem and a goal. The statement “My application does not perform well.” is completely meaningless.
Once you have decided what your goal is, make measurements to see where you stand. Then you know how far you have to go. You will also know when you have reached the goal and can stop working.
The two most commonly used measures of a computer system’s performance are throughput and response time. Throughput is the number of operations performed per unit of time and is often expressed in transactions entered per hour, orders processed per day, and the like. Response time is the time from the user’s initiation of an operation until he or she can continue.
Measure the application’s performance as well as the overall system’s performance. Measurements of cpu utilization, disk i/o rates, etc. may show symptoms of problems and provide clues to tell you where to investigate further, but application performance, whether or not the users are satisfied, and whether or not you are meeting your business goals is what matters.

Understand what is “normal”

Use your monitoring tools to collect data when you do not have a problem. Then when you do have a problem, collect new data. If you are familiar with your system’s normal behaviour, you will be able to spot problem symptoms more easily. You can compare your new data to the “normal” data to see what has changed.

If it ain’t broke, don’t fix it

If your system is fine and working and everyone can get their work done on time, don’t fix anything. Leave it alone. Just collect data.

Change one thing at a time

You must be systematic about any changes you make. Often, changing one thing affects another. For example, if you increase the size of the database buffer pool to reduce disk i/o, memory consumption will increase and may cause increased disk activity due to paging. Balancing the use of all your resources should be one of your goals.
Always measure the effect of every change you make to see if you are making things better or worse. If you change two things and one makes things better but the other makes them worse, you won’t know which one.

Learn how to fish

The tips given here are guidelines. They are rules of thumb that are the result of past experience. They will work in many but not all situations. Applications and systems are so complex and different from one another that it is impossible for everyone to configure their system exactly the same way. Each situation will require analysis and thought. To get (and to keep getting) good performance from a large system with many users and complex applications takes time and effort.

Check your system

Make sure that you don’t have a problem unrelated to Progress. Tuning Progress usually can’t compensate for insufficient or unbalanced machine resources. There are three main areas to examine:

  • CPU Utilization: Less than 90% is good. That shows you aren’t trying to use more than you have, and that you have at least some to spare.
  • Disk I/O: A good disk can perform about 60 random or 150 sequential transfers per second. If you have disks whose utilization is consistently above 60%, they are overloaded. Disk usage ought to be balanced so that each disk gets roughly the same amount of activity.
  • Memory: If you don’t have enough memory, the system will be paging (writing data from memory to disk and reading it back again later). This allows the system to create the illusion that it has more memory than there actually is, which can be a good thing. However paging requires additional disk i/o. This takes time and takes away disk capacity for doing useful work. It is difficult to generalize about how much paging is too much because systems vary so much, but more than 5 hard page faults per second is probably something that should be investigated.

Depending on your system’s configuration, you may also need to look at other areas, such as network devices and terminal controllers. NFS mounted file systems are sometimes a source of trouble that is overlooked. Consult your system’s documentation to see if it offers any useful advice. If you have a UNIX system, the “man” pages probably won’t help you much. Some other useful sources of information are:

  • “Unix System V Performance Management”, Phyllis Eve Bergman and Sally Browning ed., published by Prentice Hall. isbn: 0-13-016429-1
  • “Sun Performance and Tuning” by Adrian Cockroft, published by Sunsoft Press. isbn: 013-149642-3
  • “AIX Version 3.2 and 4.1 Performance Tuning Guide”, published by IBM. Order No: SC23-2365-03
  • “System Performance Tuning” by Mike Loukides, published by O’Reilly and Associates, Inc. isbn: 0-937175-60-9
  • “Guide to Performance Management”, a VMS manual, published by Digital.

“Sun Performance and Tuning” is excellent and very useful even if you have some other kind of system. Mr. Cockroft is an excellent writer who knows how to explain complex topics clearly.

Keep in touch with your OS vendor

Most operating system vendors publish performance-related documentation. See what yours has to offer. IBM, HP, and Sun also publish performance tuning and analysis documents on their web sites. Go to the sites and search for “performance”.
Most operating system vendors provide patches to correct operating system problems. Sometimes these patches will be for problems that should be corrected on your system. There is a good chance that your vendor makes patches available on its web site.

Look at your applications

Tuning the system or the database won’t help you much if you have a poorly designed or poorly written application. Look at the application source if you can. Make sure it is using the indexes you have, that you don’t have unneeded indexes, that transactions are as short as possible, that you are not sorting unnecessarily, that you use no-undo variables where possible, etc. This is a large topic in its own right and is not addressed here.

Tools

Unix Tools

Some useful tools that are commonly available on Unix systems are:

  • cp – copy a file
  • df – shows available space on filesystems
  • du – shows disk usage by directories and files
  • fuser – identify processes using files or file structures
  • glance – HP-UX system and process activity monitoring tool
  • iostat – report i/o statistics
  • ipcs – show ipc (shared-memory, semaphore, and msg queue) status
  • last – show last login time and date for user or tty
  • lsattr – AIX, shows the attributes of devices
  • lslv – displays information about a logical volume or the logical volume allocations of a physical volume
  • mpstat – show multi-processor statistics
  • netstat – show network status and report statistics
  • nfsstat – show Network File System (NFS) status and report statistics
  • no – displays or sets network options
  • ping – send ICMP Echo request packets to a network host
  • ps – report process status
  • pstat – print system facts
  • sadp – disk access profiler
  • sar – system activity reporter
  • showmount – show all remote mounts
  • spray – send a stream of packets to a network host and report transfer rate
  • time, timex – time the execution of a command
  • top – display information about the top cpu consumer processes
  • trace – trace system calls and signals
  • traceroute – print the route packets take to a network host
  • truss – trace system calls and signals
  • vmstat – report virtual memory, paging, and disk statistics
  • w – who is logged in and what they are doing
  • who – who is logged in on the system
  • whodo – who is logged in and what they are doing

Not every Unix system has all of the tools listed above. Check your system’s documentation to see what you do have on your system.

Windows NT Tools

The following tools are available for Windows NT systems.

  • Performance Monitor – a graphical tool for performance measurement. Includes charting, alerting, and reporting functions.
  • Event Viewer – a tool for monitoring the Windows NT event log
  • Quick Slice – shows active processes and threads with percentage of cpu utilization
  • Process Viewer – shows detailed information about active processes
  • SMS – Windows NT System Management Server

Progress Tools

The following tools are provided by Progress:

  • Promon – a database monitoring/activity reporting tool
  • Proutil dbanalyse – reports space usage, fragmentation, etc on tables and indexes
  • The 4GL Profiler – reports which procedures are called, how often, and how long they take
  • The 4GL – use it to instrument your application
  • Virtual System Tables – Database manager activity, usage and status data from 4GL or SQL

Disks

Use multiple disks

Use the multi-volume feature to put your database on multiple disks. Many small disk drives are better than one or a few large ones. The reason is that the operating system can transfer data to and from several disks simultaneously. The more drives you have, the higher the total transfer rate can be.
Don’t put anything else (including swap files) on the disks that have the database.
Put the bi file on the fastest disk. Avoid putting most other files on the disk that has the bi file. If you can’t dedicate disks to the database and bi files, try to balance things so that all of the disks have approximately the same amount of activity.

Balance disk usage

To make the most of your disk subsystems, you shouldn’t make one of them work harder than the others. If one disk is overloaded and others are idle, the overloaded disk can be a severe bottleneck that limits performance.
Arrange files and database extents so that the disk activity is approximately equal on all the disks (to within roughly 10%). Consider all sources of disk i/o activity, not just the database. Some other sources of disk activity unrelated to the database itself include:

  • The operating system does swapping and paging
  • Your application may read and write files
  • The application’s r-code is read into memory from files
  • The 4gl interpreter creates temporary files
  • 4gl temporary tables often overflow to disk
  • Sorting query results uses temporary files during the sort
  • Other applications that do disk i/o

High-speed disks rotate at 7200 rpm and allow 80 or more random access transfers per second. Slow disks will allow about 30 random accesses per second.
The various system monitoring tools (for example, sar -d) will report disk utilization in percent. These numbers are based on the proportion of time that one or more processes are waiting for an i/o operation to complete.
A rule of thumb I use for characterizing disk load is given by the table below.

0 to 25 % Low (underutilized)
25 to 40 % Moderate
40 to 60 % Heavy
Above 60 % Overloaded

In addition to utilization, you should examine the average wait time. This is the average amount of time a process had to wait for a disk i/o operation. If you see average waiting times larger than 50 milliseconds, this may indicate that the disk is overloaded even though it may not appear so from a utilization point of view.
But remember: disk activity should be balanced. Make them all work equally hard. Having an overloaded disk and an idle disk is a waste of money.

Two disks are better than one

The storage capacity of disk drives has increased dramatically since 1995 and will continue to do so for the next few years. You can already buy drives with 20 gigabyte storage capacity.
To maximize performance, you are better off with more smaller disks than one or a few large ones. For example, it is better to have four 1 GB disk drives than one 4 GB drive. This is because you can perform only one read or write operation at a time with one disk, but with four you can do four at the same time. Thus the aggregate throughput of four drives is much higher than that of a single drive, even though the capacity is the same.
Disks are relatively cheap now compared to the days when a 5 MB drive cost $10,000. But they still cost money.

Two disk controllers are better than one

A SCSI channel can address up to seven disks or other devices. But you may not want to put that many on one channel if you are interested in the best possible performance.
A “fast and wide” SCSI-2 channel has a theoretical maximum transfer rate of 20 megabytes per second. A single disk drive can provide a sustainable data rate of roughly 4 megabytes per second. This implies that you should have no more than 4 drives per fast and wide SCSI-2 controller.
Standard SCSI channels can sustain a transfer rate of approximately 5 megabytes per second. This is one fourth of fast and wide SCSI-2.
In general, several controllers with disk drives distributed evenly over them will give better performance than a single controller.

Use disk striping

If your operating system supports disk striping, consider using it. Striping allows you to spread one or more files across several disks in a uniform manner. This can improve performance by balancing the activity on all your disks so that they are accessed approximately the same amount.

Use raw disks – maybe

A raw disk or raw partition is a contiguous section of disk space that does not have a filesystem on it. You can place any or all fixed length extents of a Progress database on raw disks. Using raw disks might improve performance by up to 20% in some circumstances, but they have many disadvantages. Some of them are:

  • Defining, keeping track of, and reconfiguring databases that use raw partitions is harder than for databases that use files.
  • Raw partitions are fixed size. You can’t change their size when you want to.
  • You usually can’t use the same tool to back up raw partitions as you use for filesystems. Instead, you must make a backup of the disk the partition is on, define new partitions and filesystems, and restore your backups.
  • The operating system does not know what is stored on a raw partition. It looks the same if it is empty or if it has a database extent on it or if it has five database extents on it.
  • Since raw partitions are accessed without using the system’s buffer pool, you will have to increase the size of the Progress buffer pool and decrease the size of the Unix buffer pool to get an equivalent amount of database page buffering. This could affect other applications’ performance.

See “Progress In the Raw” for more information.

Avoid IDE disk controllers

IDE disk controllers were designed for inexpensive single-user personal computers running some bogus software called DOS (Dog Operating System). These disk controllers transfer data from the controller to memory one byte at a time. This is maximally ungood. But it is why they are so cheap. The manufacturers should pay people to take them.
Unless you are a dog, don’t use them. If you have them in your system, perform the following steps immediately.

  • make an act of contrition,
  • make backups,
  • turn off the computer system,
  • open the case,
  • remove the thing,
  • throw it out the window,
  • go to the store and buy a reputable brand of SCSI disk controller,
  • don’t forget to buy new disks that work with SCSI controllers.

Avoid RAID 5 Configurations

In RAID 5 disk configurations, data are striped across several disks along with “parity” data. The parity data is distributed across the drives in such a way that a data block and its parity information are always written to different devices. This technique allows reconstruction of all data that was present on a drive that has failed. RAID 5 systems seem attractive because they are resilient to a single disk failure but cost much less than a fully mirrored configuration.
Read performance can be quite good, but write performance will be terrible. This is because the parity data must be updated whenever a block is written. In the worst case, writing a single database block requires four i/o operations. The following operations are performed internally by the RAID 5 system:

  • Read the old data group
  • Read the old parity data
  • Merge the new database block into the old data group
  • Compute new parity data
  • Write the new data
  • Write the new parity data

“But since the data and parity are on separate drives, they can be read in parallel” you say. Yes that is true. But TANSTAAFL (There Ain’t No Such Thing As A Free Lunch). Reading two disks at the same time uses up half your disk bandwidth.
For more information about RAID configurations, see Raiders of the Lost Disk.

Block Sizes

Progress allows you to control the size of the database, before-image log, and after-image log files. You should increase all of them from their default values. For best performance, Progress block sizes should be the same size or a multiple of the operating system’s block size.

Set the database block size

The default block size for the database is 1024 bytes on most systems. You can specify another size when you create a database. By setting it to 8192 bytes (8 kilobytes), you can improve database i/o performance significantly. This is because with larger block sizes, you get more bang for your buck – more data are transferred in each i/o operation. Also, because index compression works at the index block level, indexes will compress better with larger block sizes. Writing (or reading) 8 kilobytes takes very nearly the same amount of time as it does to write 1 kilobyte. You set the database block size while creating a database with the command

prostrct create mydb -blocksize 8192

Don’t forget to adjust the value of the buffer pool size (startup parameter -B) to account for the larger buffers. -B is specified as the number of buffers.
What is the best block size? It depends. It depends on your application and your data. In general, larger block sizes are probably better than small ones, but if you have many small records, you may end up using more disk space because no more than 64 records can be stored in an 8 k block and no more than 32 in the smaller sizes.

Set the before-image log block size

The default for the before-image log’s block size is the same as the database block size. Unless you are using 8 kilobyte database blocks, you should change the before-image log’s block size to 8 kilobytes. On most systems, this will give the highest I/O throughput. On some, 16 Kilobytes will give slightly better throughput, but the difference is usually small enough that it doesn’t really matter.
Set the before-image block size to 8 kilobytes. You do this by specifying the -biblocksize option while truncating the bi file. e.g.

proutil mydb -C truncate bi -biblocksize 8

Set the before-image log cluster size

Space in the before-image (bi) file is allocated in units called “clusters”. Whenever Progress fills a bi cluster, it performs an operation called a “checkpoint” to synchronize the disk-resident copy of the database with what is in memory. This is done to limit the amount of work required during crash recovery or restart and also to allow bi clusters to be reused when the data they contain is no longer needed.
Set the before-image cluster size to at least 1024 kilobytes. You do this by specifying the -bi option while truncating the bi file. e.g.

proutil mydb -C truncate bi -bi 1024

Note that when the bi file is initialized after you have truncated it, it will be expanded to 4 clusters. Make sure you have enough free disk space.
The benefit of increasing the cluster size is that page writers will have enough time to do the necessary i/o in the background. But you only need to make the clusters large enough so that the page writers can work effectively.
Disadvantages of increasing the cluster size are that restart and crash recovery will take longer, and when the bi file has to be expanded, it is expanded in larger chunks.
If you don’t use page writers, increasing the cluster size can cause long checkpoint completion times (2 minutes or more), especially if the buffer pool is large. These are observable as periods when no database update activity, transaction starts, or transaction ends can occur.
It is not unreasonable to set the cluster size to 1024 kilobytes or more, but sizes larger than 8192 kilobytes are probably overkill for most installations.

Set the after-image log block size

If you are using after-image journalling, change the after-image log’s block size. The default after-image log’s block size is the same as the database block size. Unless you are using 8 kilobyte database blocks, this is too small. Set the after-image log block size to 8 kilobytes. You do this with the command:

rfutil mydb -C truncate ai -aiblocksize 8192

Shared Memory

Use spinlocks

On multi-processor systems, you can use spinlocks to improve internal resource sharing among database processes. All shared resources must be locked while they are being used, typically for periods on the order of a few microseconds. Spinlocks are essentially loops that retry continuously when an attempt to lock a shared resource fails. After some number of retries, the process will go to sleep for a short time. This is termed a “latch timeout”. The number of retries before sleeping is controlled by the -spin parameter.
Tuning -spin essentially means increasing its value until the number of latch timeouts no longer decreases. Increasing -spin will also cause an increase in cpu consumption, so you have to stop increasing it when cpu consumption gets above 90%.
Start by setting -spin to about 5000. This should be a good starting point. On systems with only a few cpus (2 or 3), you may find that cpu utilization becomes excessive (over 90%). If so, try smaller values. If cpu utilization is less than 90%, you can increase -spin. Try 10000 or 15000. You can adjust -spin from promon while the database is running.
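As a starting point, the parameter can be given at server startup, along these lines (a sketch; proserve and the database name mydb follow the earlier examples, and the value should then be tuned as described above):

proserve mydb -spin 5000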

Processes

Progress provides several types of background processes that improve performance. You should use them. Remember to increase the number of users startup parameter (-n) to account for background processes.

Use page writers

Asynchronous page writers (apw’s) are background processes whose job is to write updated database blocks to disk as needed so that servers do not have to take the time to do these writes. This gives them more time to do useful work on behalf of clients. Among their virtues are:

  • Checkpoints take less time because there are fewer modified pages and the page writers help with the checkpoint operation.
  • A supply of unmodified buffers is available for servers to read database pages from disk. They don’t have to write dirty pages first.
  • The lru chain does not become clogged with dirty pages at the oldest end so search time is reduced.

Page writers are self-tuning. Although there are parameters that affect their operation, you should not use them. The default values have been shown to be correct. The choice of how many apw’s to start is the only thing you have to worry about and you choose that by starting with a small number (1 or 2). Then let the system run for a while and look at the Checkpoint display in promon. It shows what happened during the last 8 checkpoints.
See if the number of buffers flushed (the rightmost column) is consistently 0 or close to zero. If it is, you have enough apw’s and they are keeping up with the load. If you see 1 digit numbers, you are close to the edge. If you see higher numbers, then start another apw to see if it is enough.
The buffers flushed column indicates if any buffers were NOT written in the background during the asynchronous checkpoint. When a checkpoint ends (which happens at the same time that a cluster fills), any buffers left on the checkpoint queue must be written immediately. NO database changes can occur until those writes have been completed. This is because there is no space to write additional bi notes until the next cluster is opened.
This can only happen if the apw’s cannot do all the scheduled writes. There are 4 major causes:

  • You are not using apw’s in the first place.
  • The bi cluster size is too small. Checkpoints might occur so close together that the apw’s don’t have enough time to do their work.
  • The disk subsystem can’t sustain the required i/o rate. For example, a database stored on a single disk is likely to suffer from this problem.
  • You don’t have enough apw’s running to perform the required writes.

Don’t use page writers for read-only databases. They have no modified pages.

Use the before-image log writer

The before-image log writer (biw) is a background process that writes filled before-image buffers to disk. Always use the before-image writer (biw). There are no tuning parameters for it. Unlike page writers, you only need one before-image writer.

Use the after-image log writer

The after-image log writer (aiw) is a background process that writes filled after image buffers to disk. If you are using after imaging journalling, use the after image writer. There are no tuning parameters for it. Unlike page writers, you only need one after-image writer.

Use the watchdog

Every process that connects to the database must make use of various shared resources in order to operate. Access to shared resources is regulated by a system of locks. When a process accesses shared data, it first locks them to gain exclusive access and releases the lock when the operation is done. If a process should be killed, it will not be able to release the locks it holds. No other process will be able to access the locked resource, but the lock holder cannot release the lock.
The watchdog’s job is to deal with such cases. It is a background process that periodically checks to see if another process has died or disappeared without disconnecting from the database. If it finds such a situation, it will assume the identity of the lost process, undo its current transaction if one exists, and release all its locked resources. While this almost always works successfully, it can fail on rare occasions if the missing process left the locked resources in an inconsistent state.
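The background processes described in this section are started as separate commands against the running database, roughly like this (a sketch using the standard Progress utilities and the database name mydb from the earlier examples):

proapw mydb     # start one asynchronous page writer (repeat to add more)
probiw mydb     # start the before-image log writer
proaiw mydb     # start the after-image log writer (only with after-imaging)
prowdog mydb    # start the watchdog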

Run all Progress processes at the same priority

All Progress processes should have the same system scheduling priority. This is so because they share database resources. If a low priority process should lock a shared resource, higher priority processes will have to wait to access the resource. But the low priority process may not be able to finish using it and release it because the system will not schedule it due to its low priority.

Buffers

Tune the database buffer pool

The purpose of the database buffer pool is to cache soon-to-be-needed database pages (blocks) in (shared) memory to avoid disk i/o. The -B parameter determines the number of blocks that are kept in memory. When a Progress process wants to access a database block, it looks in the buffer pool to see if the block is there. If it is, then a disk read has been avoided and time saved. Progress uses the “least recently used” (lru) algorithm to predict the future to decide which blocks to keep in memory.
The default value for -B is 8 times the number of users (-n). If the buffer pool hit rate is below 90%, increase it. If the hit rate is above 95%, you probably have a large enough buffer pool, unless the number of database reads is high.
The optimum value is a function of the application, database size, number and speed of the disks and controllers and other factors. The default is probably wrong for everyone.
As a rule of thumb, a decent disk can sustain up to 50 random i/o operations per second or 100 sequential i/o operations. Some disks are faster, some slower. Regardless of the hit rate, if the total number of database reads and writes per second approaches 30 times the number of disks the database is stored on, increasing the size of the buffer pool can help.
As you increase -B, make sure that you don’t cause paging or swapping due to the increased shared memory area size. The buffer pool is by far the largest data structure used by the database manager. Don’t forget that you are specifying the number of buffers. The amount of memory required by the buffer pool is approximately (130 + database block size in bytes) * number of buffers.
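As a rough worked example (assuming the 8192-byte block size suggested earlier and an illustrative -B of 20,000 buffers):

(130 + 8192) bytes per buffer * 20000 buffers = 166,440,000 bytes, i.e. roughly 160 MB of shared memory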

Set the number of before-image log buffers

Set -bibufs to 15. If promon shows more than 5 % bi buffer waits, try setting -bibufs to 30. Values higher than 30 are not going to make any difference and will only waste memory.

Set the number of after-image log buffers.

Set -aibufs to twice the number of before-image log buffers if you are using after imaging.
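Put together, the buffer-related startup parameters might look like this (a sketch; the -B value is only an illustration and should come from the sizing discussion above):

proserve mydb -B 20000 -bibufs 15 -aibufs 30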

Networking

Use TCP/IP, Avoid all others.

Do not use the spx/ipx protocols. ipx is very inefficient for client-server communications. The maximum message size is 512 bytes. Messages longer than that must be split up and sent as several messages. Receipt of each message fragment must be acknowledged before the next one can be sent. This is very ungood. The tcp/ip protocol is much better and gives much better performance.
“But what about (insert favorite protocol name here)?” you say. Well, perhaps there are some good reasons why you would use it. But this article is about performance. Use tcp/ip. The other protocols are all dead anyway.

Increase the network message buffer size

The -Mm parameter determines the size of the message buffers Progress uses for sending and receiving messages in client-server configurations. Using tcp/ip, it is much more efficient to send one 1,000 byte long message than it is to send ten 100 byte long messages.
The default value of the message buffer size is 1024 bytes. You should increase it to at least 4096. Use a value that is a multiple of 4096. This will allow the server to send much more data per network message when it can. When there are less data than a full buffer, shorter messages will be transmitted. For example, if the buffer size is 16384, the server can send any size message up to 16384 bytes without dividing into several fragments.
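For example, a server could be started with a larger message buffer along these lines (a sketch; the service name or port given to -S is a placeholder for your own broker settings):

proserve mydb -S 2500 -N tcp -Mm 8192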

Use traceroute

The traceroute utility is a great help in determining how far tcp/ip messages have to travel to reach their destination and how long it takes for them to get there. traceroute (sometimes called tracert) is a public domain or shareware utility and is available for all operating systems. Use it to find out if the path between client and server is longer than you expected. You may find your messages going through 4 routers you didn’t know about.

Miscellaneous Topics

Keep records

“Good judgement comes from experience. Experience comes from bad judgement.”
“Experience is what you get when you don’t get what you want.”
“To predict the future, you must know the past.”

You should keep records of what you do, both what works and what doesn’t. Next year when the same problem occurs, you might not be able to remember what you did to solve it.
You can spot trends and make forecasts when you collect data over a long enough time. For example, as you add users, you can probably tell when you will have to add more memory to your system.
When you get promoted, the next person who gets your job won’t have to start over.

Use the -q option

Normally, Progress searches PROPATH directories when looking for a procedure to make sure that a newer version of the file will be used if one exists. This is desirable during development.
The -q option tells Progress to search PROPATH directories only on the first use of a procedure. After that if the procedure still resides in memory or in the local session-compiled file, Progress uses that version rather than searching the directories again. This reduces the overhead for finding a procedure.
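The option is given when the client session starts, for example (a sketch; mpro and the startup procedure start.p stand in for your own client command and program):

mpro mydb -q -p start.p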

Be willing to experiment

I know: You have a business to run, your system has to be up 36 hours a day, you’re busy, everybody else is busy. There are a million reasons for not experimenting. But…”Nothing ventured, nothing gained.”

Get expert help if you need it

If you don’t know how to solve a problem, find someone who does. There are many sources of assistance, including consultants who earn their living by helping Progress customers, Progress Software’s own consulting services, books, newsgroups, and so on. Among them are:

  • Mr. John Campbell has published several useful and interesting Progress books. Among them are:
        “High Performance Coding: A Guide to Efficient Reports and Programs”
        “Making Good Progress”
      “Work Smarter, Not Harder”

    All of the above are available from:

      white star software, po box 51623, palo alto, ca 94303.

    Mr. Campbell’s telephone number is: 4158570686. He can also be reached via e-mail at [email protected] and on the web at www.wss.com

  • Mr. Dan Foreman’s “Progress Performance Tuning Guide” is an excellent reference. Mr. Foreman can be reached by telephone at 7704499696 and via e-mail at [email protected] and on the web at www.usiatl.com.
  • RTFM: The Progress manuals
      “System Administration Guide”, “System Administration Reference”, and “Database Design Guide” will be useful.
  • The Internet offers a wealth of information. Check out the following links:
  • You can also get a wealth of information at the Progress User Conferences. Most conferences offer one or more sessions related to performance tuning. Members of the database development team are always at the conference to speak with customers and answer questions. Copies of the proceedings for past conferences can be obtained from Progress Software Corp., but not all issues are available.
  • A Performance Tuning Workshop is usually offered during or immediately after the annual user conferences. According to customers who have participated, it is well worth the extra money and time.
  • Progress Consulting Services can be reached at 6172804290
  • Progress Education Services can be reached at 8004776473.4452

http://www.fast4gl.com/downloads/monographs/tuning/tuning.html#misc


Progress Explorer Tool won't open on Windows 2008 R2 November 19, 2014

After installing Progress OpenEdge 10.2B on Windows 2008 R2, the Explorer Tool would not open; it shut itself down as soon as it started.
After some googling I found a fix. It is not the official way, but it works around the problem for now, so that is what I did.
In the registry, change EnableJIT from 1 to 0 under HKEY_CURRENT_USER\Software\Microsoft\Java VM.
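The same change can be made from an elevated command prompt, roughly like this (a sketch; it assumes EnableJIT is a DWORD value):

reg add "HKCU\Software\Microsoft\Java VM" /v EnableJIT /t REG_DWORD /d 0 /f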
Then run the tool as Administrator.
That's it.


How to fix Windows 2008 R2 BOOTMGR is missing

Powering the VM on in VMware, I got this error:
BOOTMGR IS MISSING
PRESS CTRL+ALT+DEL TO RESTART
Pretty discouraging :\ time to find a fix.
Cause: most likely the loader cannot find the right boot files because the wrong partition was set active, or more than one primary partition is active.
Solution: after a bit of googling, here is the fix.

  • Boot from DVD, and enter the recovery command prompt
  • diskpart
  • list disk
  • select disk 0
  • list partition (look for a small partition, possibly around 100MB, usually partition 1)
  • select partition 1
  • active
  • exit
  • reboot

That was it; the server boots again.


Cannot delete Oracle oci.dll file November 11, 2014

After uninstalling Oracle, I wanted to delete the ORACLE_HOME directory as well, but the deletion kept failing with an error that the oci.dll file could not be removed. Restarting the machine and removing all the Oracle registry entries and services did not help.
Solution: the file was in use by another program: the Distributed Transaction Coordinator.
Stop that service in Windows Services, then delete ORACLE_HOME as usual.
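The same can be done from an elevated command prompt (a sketch; MSDTC is the service name of the Distributed Transaction Coordinator):

net stop MSDTC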
Done well!!!


[QAD error] Unable to get value of the property 'getAttribute': object is null November 10, 2014

Error:

This error appears when opening the Process Map Editor:
Unable to get value of the property ‘getAttribute’: object is null or undefined in Process Maps.
Process Maps give a script error: Unable to get value of the property ‘getAttribute’: Object is null or undefined.
😐 I still owe a screenshot here @@

Resolution:

The QAD SVG plugin is not installed and cannot be found on the HomeServer; possibly antivirus software blocked the plugin from being installed along with the client.

Download SVGView.exe again and reinstall it:

Reinstall the SVG plugin, reinstall the whole QAD .NET UI client, or install the Adobe SVG Viewer: http://www.adobe.com/devnet/svg/adobe-svg-viewer-download-area.html

Cause:

The SVG viewer was not installed when the QAD .NET UI was installed.
Environment/Conditions:
IE9
.NET 2.9.4, 2.9.6
QAD SE 2013


Lỗi : "corflags : error CF001 : Could not open file for writing" when trying to modify a Controller EXE file November 5, 2014

Problem (Abstract)

Customer is trying to use ‘corflags.exe’ to modify a Controller executable (for example, to solve the problem in separate Technote 1508588). An error appears.

Symptom

corflags : error CF001 : Could not open file for writing

Cause

The operating system cannot modify the EXE file.

There are several possible causes for this:

  • Scenario #1 – Windows user running the command prompt does not have NTFS write access to the file
  • Scenario #2 – EXE file has the “read only” file flag ticked
  • Scenario #3 – Someone has the EXE file open (for example, the program is currently running in Windows)

Resolving the problem

The solution varies depending on the cause:

  • Scenario #1 – Modify NTFS permission, or launch Command Prompt as a different (administrative) user
  • Scenario #2 – Right-click on the file, click “properties” and untick “read only” attribute.
  • Scenario #3 – Close the EXE file (currently running in Windows).

Source: http://www-01.ibm.com/support/docview.wss?uid=swg21589922
