
The other day I was transferring a large file using rsync to another system on my local area network. Since it was a very big file, it took around 20 minutes to complete. I didn't want to wait that long, but I didn't want to terminate the process manually by pressing CTRL+C either. I wondered if there was an easy way to run a command for a specific time and kill it automatically once the time is up in Unix-like operating systems – hence this post. Read on.

Run A Command For A Specific Time In Linux

We can do this in two ways.

Method 1 – Using “timeout” command

The most common method is to use the timeout command. For those who don't know, the timeout command effectively limits the absolute execution time of a process. It is part of the GNU coreutils package, so it comes pre-installed on almost all GNU/Linux systems.

Let us say you want to run a command for only a specific time and have it killed automatically once that time has elapsed. To do so, we use:

$ timeout <time-limit-interval> <command>

For example, the following command will be terminated after 10 seconds.

$ timeout 10s tail -f /var/log/pacman.log

You don't have to specify the suffix "s" for seconds. The following command is the same as the one above.

$ timeout 10 tail -f /var/log/pacman.log

The other available suffixes are:

  • ‘m’ for minutes,
  • ‘h’ for hours,
  • ‘d’ for days.
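
For instance, to watch the same log for two minutes instead, use the minute suffix:

$ timeout 2m tail -f /var/log/pacman.log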

If you run the tail -f /var/log/pacman.log command on its own, it will keep running until you manually end it by pressing CTRL+C. However, if you run it under the timeout command, it will be killed automatically after the given time interval.

In case the command is still running even after the time is out (some processes ignore the default TERM signal), you can send a kill signal, like below.

$ timeout -k 20 10 tail -f /var/log/pacman.log

In this case, timeout sends the tail command a TERM signal after 10 seconds; if it is still running 20 seconds after that, timeout sends a KILL signal, which cannot be caught or ignored, to end it for good.

The timeout command can be especially useful when troubleshooting hardware issues. For instance, run the following command to display all messages from the kernel ring buffer, but only for 10 seconds.

$ timeout 10 dmesg -w

For more details, check the man pages.

$ man timeout

Sometimes, a particular program might take a long time to complete and end up freezing your system. In such cases, you can use this trick to end the process automatically after a particular time.


Also, consider using Cpulimit, a simple application to limit the CPU usage of a process.


Method 2 – Using “Timelimit” program

The Timelimit utility executes a given command with the supplied arguments and terminates the spawned process after a given time with a given signal. It first sends a warning signal and then, after a further timeout, the kill signal.

Unlike the timeout utility, Timelimit has more options. You can pass arguments such as killsig, warnsig, killtime, and warntime.

It is available in the default repositories of Debian-based systems, so you can install it using the command:

$ sudo apt-get install timelimit

For Arch-based systems, it is available in the AUR, so you can install it using any AUR helper program, such as Pacaur, Packer, Yay, or Yaourt.

For other distributions, download the source from here and install it manually.

After installing the Timelimit program, run the following command to limit a command's execution to a specific time, for example 10 seconds:

$ timelimit -t10 tail -f /var/log/pacman.log

If you run timelimit without any arguments, it uses the default values: warntime=3600 seconds, warnsig=15, killtime=120, killsig=9.
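
The warn/kill behaviour can be tuned explicitly. As a sketch, assuming your timelimit build supports the -t (warntime), -T (killtime), -s (warnsig) and -S (killsig) options, the following warns tail with SIGTERM after 10 seconds and kills it with SIGKILL if it is still alive 5 seconds later:

$ timelimit -t10 -T5 -s15 -S9 tail -f /var/log/pacman.log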

For more details, refer to the man pages and the project's website given at the end of this guide.

$ man timelimit

And, that's all for today. I hope this was useful. More good stuff to come. Stay tuned!

Cheers!


I have written a kernel module to measure the accuracy of the ndelay() kernel function.

#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/time.h>
#include <linux/delay.h>
static int __init initialize(void)
{
    ktime_t start, end;
    s64 actual_time;
    int i;
    for (i = 0; i < 10; i++)
    {
        start = ktime_get();
        ndelay(100);            /* request a 100 ns delay */
        end = ktime_get();
        /* how long the delay actually took, in nanoseconds */
        actual_time = ktime_to_ns(ktime_sub(end, start));
        printk("%lld\n", (long long)actual_time);
    }
    return 0;
}

static void __exit final(void)
{
     printk(KERN_INFO "Unload module\n");
}

module_init(initialize);
module_exit(final);

MODULE_AUTHOR("Bhaskar");
MODULE_DESCRIPTION("delay of 100ns");
MODULE_LICENSE("GPL");

The dmesg output looks like this:

[16603.805783] 514
[16603.805787] 350
[16603.805789] 373
[16603.805791] 323
[16603.805793] 362
[16603.805794] 320
[16603.805796] 331
[16603.805797] 312
[16603.805799] 304
[16603.805801] 350

I have gone through one of the posts on Stack Overflow: Why udelay and ndelay is not accurate in linux kernel?

But I want a finely tuned nanosecond delay (ideally in the range of 100-250 ns) in kernel space. Can anyone suggest an alternative way of doing this?


You can use

high-resolution timers (hrtimers), via the

    hrtimer_init
    hrtimer_start
    hrtimer_cancel

functions. An example is available here
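
As a rough, minimal sketch of that API using the classic hrtimer_init interface (the names my_timer, my_timer_callback and arm_timer are invented for illustration; the resolution you actually get still depends on your hardware and kernel configuration):

#include <linux/kernel.h>
#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct hrtimer my_timer;     /* hypothetical timer for this sketch */

static enum hrtimer_restart my_timer_callback(struct hrtimer *timer)
{
    pr_info("hrtimer expired\n");
    return HRTIMER_NORESTART;       /* one-shot: do not re-arm */
}

static void arm_timer(void)
{
    hrtimer_init(&my_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
    my_timer.function = my_timer_callback;
    /* fire once, 200 ns from now: ktime_set(seconds, nanoseconds) */
    hrtimer_start(&my_timer, ktime_set(0, 200), HRTIMER_MODE_REL);
}

On unload, hrtimer_cancel(&my_timer) should be called so the callback cannot fire after the module code is gone.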


If you are targeting an x86-only system, you can use the rdtsc instruction to read the CPU's time-stamp counter. Reading it has very little overhead, but you do need to convert from CPU clock cycles to nanoseconds, which depends on how fast your CPU clock is running.

/* read the 64-bit time-stamp counter (x86 only) */
static unsigned long long rdtsc(void)
{
    unsigned int low, high;

    /* rdtsc returns the counter in EDX:EAX */
    asm volatile("rdtsc" : "=a" (low), "=d" (high));
    return low | ((unsigned long long)high) << 32;
}
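
As a sketch of the cycles-to-nanoseconds conversion (assuming a constant-rate TSC; on x86 the kernel exports the calibrated frequency as tsc_khz):

/* convert TSC cycles to nanoseconds, assuming a constant TSC running
 * at tsc_khz kilohertz: ns = cycles * 10^6 / tsc_khz. Suitable for
 * short intervals only; the product must fit in 64 bits. */
static unsigned long long cycles_to_ns(unsigned long long cycles,
                                       unsigned long long tsc_khz)
{
    return cycles * 1000000ULL / tsc_khz;
}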

Otherwise you can use the kernel high resolution timers API.

  • Does the above code snippet return the number of cycles, or the time gap itself? – Bhaskar Jupudi Mar 31 '16 at 22:10
  • rdtsc returns clock cycles only. It is fast and low-overhead, but you do need to convert the result based on how fast your CPU is, and it is x86 only. – Jbobo Lee Apr 1 '16 at 0:35
  • Thanks for your comment. But I'm quite confused here. My CPU has 12 cores. Can you provide an example of converting the number of cycles obtained from rdtsc to actual time in nanoseconds? – Bhaskar Jupudi Apr 1 '16 at 1:18





If you work at the kernel level, together with a system call, you can measure results at a more precise nanosecond granularity. The downside is that you have to insmod and rmmod the module each time, but being able to measure in nanosecond units at exactly the moment you want is itself a huge win.


#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/time.h>
#include <linux/delay.h>
static int __init initialize(void)
{
    ktime_t start, end;
    s64 actual_time;
    int i;
    for (i = 0; i < 10; i++)
    {
        start = ktime_get();
        ndelay(100);
        end = ktime_get();
        actual_time = ktime_to_ns(ktime_sub(end, start));
        printk("%lld\n", (long long)actual_time);
    }
    return 0;
}

static void __exit final(void)
{
     printk(KERN_INFO "Unload module\n");
}

module_init(initialize);
module_exit(final);

MODULE_AUTHOR("Remoted");
MODULE_DESCRIPTION("delay of 100ns");
MODULE_LICENSE("GPL");


What makes this slightly more complicated is that some of these headers are not on any environment-variable path. Because the code is compiled at the kernel level, you have to write a Makefile so the build can find the headers it needs.


You need to build the module within the context of the Linux kernel tree. By default, the compiler will look for user-space headers in /usr/include. There are some Linux headers there (/usr/include/linux), but module.h is not one of them, as it deals with kernel constructs directly.

In short, you need a Makefile. Also, get rid of the #define MODULE. Save the following to a file called Makefile and run make:

Code:
obj-m += foo.o
all:
		make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
clean:
		make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

This triggers the kernel build system.
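
Assuming the module source is named foo.c to match the obj-m line above, a typical build-and-test session looks like this:

Code:
make
sudo insmod foo.ko
dmesg | tail
sudo rmmod foo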

Given that you are using Ubuntu, you probably already have the kernel headers installed at /usr/src/linux-headers-$(uname -r). module.h lives here:

Code:
jameson@aqua:~$ ls /usr/src/linux-headers-$(uname -r)/include/linux/module.h
/usr/src/linux-headers-3.0.0-12-generic/include/linux/module.h





The file is right below.

nano_sleep.c



#include <stdio.h>
#include <time.h>   /* Needed for struct timespec, nanosleep() and clock_gettime() */

/* Sleep for the given number of milliseconds using nanosleep(). */
int nsleep(long milliseconds)
{
    struct timespec req, rem;

    if (milliseconds > 999)
    {
        req.tv_sec = milliseconds / 1000;                              /* Must be non-negative */
        req.tv_nsec = (milliseconds - (req.tv_sec * 1000)) * 1000000;  /* Must be in range 0 to 999999999 */
    }
    else
    {
        req.tv_sec = 0;
        req.tv_nsec = milliseconds * 1000000;
    }

    return nanosleep(&req, &rem);
}

int main(int argc, char **argv, char **arge)
{
    struct timespec tps, tpe;
    //int ret = nsleep(1000);

    while (1) {
        /* two back-to-back timestamps show the clock's effective resolution */
        if ((clock_gettime(CLOCK_REALTIME, &tps) != 0) || (clock_gettime(CLOCK_REALTIME, &tpe) != 0)) {
            perror("clock_gettime");
            return -1;
        }
        //printf("sleep result %d\n", ret);
        /* take the full difference so a second rollover cannot yield a bogus value */
        printf("[%ld.%09ld] %lld ns\n", (long)tps.tv_sec, tps.tv_nsec,
               (long long)(tpe.tv_sec - tps.tv_sec) * 1000000000LL + (tpe.tv_nsec - tps.tv_nsec));
    }
    return 0;
}
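
To try it (on older glibc versions clock_gettime lives in librt, so the -lrt flag may be needed):

$ gcc -o nano_sleep nano_sleep.c -lrt
$ ./nano_sleep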


Now let's analyze the code. In practice, if you go through the elapsed time that clock_gettime exposes, the measurement is CPU-dependent, and it is impossible to fire a command at an exact nanosecond granularity.


But what if you have a problem that demands exact timing, like nuclear fusion or fission? This is where a classic-era library comes to the rescue. We will take a look at it in the next post.



