How to implement a Timer in C

Updated: 2022-04-02 01:58:43

UG wrote:
I just wanted to know whether any timer facility exists in C, as it is
not mentioned in K&R 2, or in the ISO Draft. By timer function i mean
that when we use standard input function like scanf() or getch() or
any other function, the interface stops to take input from user but
what if user doesn't give input for hours, the program will still be
waiting. Is there any way to circumvent the scanf() (or any other
input function for that matter) so that it takes some default input or
no input and just proceeds to the next line of the execution assuming
input from user as nothing and acts accordingly.




Not in standard C, but it may be possible on your target platform, so
try asking on a group for that environment.

--
Ian Collins.


UG wrote:
I just wanted to know whether any timer facility exists in C, as it is
not mentioned in K&R 2, or in the ISO Draft.



No.


<snip>


Is there any way to circumvent the scanf() (or any other
input function for that matter) so that it takes some default input or
no input and just proceeds to the next line of the execution assuming
input from user as nothing and acts accordingly.




I assume it uses the default value on time out. Still, the answer
is no.

You may want to look into either threads or processes but none of
them is topical here.

--
Ioan - Ciprian Tandau
tandau _at_ freeshell _dot_ org (hope it's not too late)
(... and that it still works...)


On Feb 27, 9:15 pm, "UG" <unmeshgh...@gmail.com> wrote:

I just wanted to know whether any timer facility exists in C, as it is
not mentioned in K&R 2, or in the ISO Draft. By timer function i mean
that when we use standard input function like scanf() or getch() or
any other function, the interface stops to take input from user but
what if user doesn't give input for hours, the program will still be
waiting. Is there any way to circumvent the scanf() (or any other
input function for that matter) so that it takes some default input or
no input and just proceeds to the next line of the execution assuming
input from user as nothing and acts accordingly.


From the C-FAQ:




19.37: How can I implement a delay, or time a user's response, with
sub-
second resolution?

A: Unfortunately, there is no portable way. V7 Unix, and derived
systems, provided a fairly useful ftime() function with
resolution up to a millisecond, but it has disappeared from
System V and POSIX. Other routines you might look for on your
system include clock(), delay(), gettimeofday(), msleep(),
nap(), napms(), nanosleep(), setitimer(), sleep(), times(), and
usleep(). (A function called wait(), however, is at least under
Unix *not* what you want.) The select() and poll() calls (if
available) can be pressed into service to implement simple
delays. On MS-DOS machines, it is possible to reprogram the
system timer and timer interrupts.

Of these, only clock() is part of the ANSI Standard. The
difference between two calls to clock() gives elapsed execution
time, and may even have subsecond resolution, if CLOCKS_PER_SEC
is greater than 1. However, clock() gives elapsed processor time
used by the current program, which on a multitasking system may
differ considerably from real time.

If you're trying to implement a delay and all you have available
is a time-reporting function, you can implement a CPU-intensive
busy-wait, but this is only an option on a single-user, single-
tasking machine as it is terribly antisocial to any other
processes. Under a multitasking operating system, be sure to
use a call which puts your process to sleep for the duration,
such as sleep() or select(), or pause() in conjunction with
alarm() or setitimer().

For really brief delays, it's tempting to use a do-nothing loop
like

long int i;
for(i = 0; i < 1000000; i++)
;

but resist this temptation if at all possible! For one thing,
your carefully-calculated delay loops will stop working properly
next month when a faster processor comes out. Perhaps worse, a
clever compiler may notice that the loop does nothing and
optimize it away completely.

References: H&S Sec. 18.1 pp. 398-9; PCS Sec. 12 pp. 197-8,215-
6; POSIX Sec. 4.5.2.