[SAC-HELP] question about "do file wild *SAC"

Arthur Snoke snoke at vt.edu
Tue Jan 26 16:04:59 PST 2010


In the HISTORY file that accompanies the version of SAC you are using, it 
says ...


     - Known Bugs/deficiencies
        - SSS traveltimes command (endian issue)
        - do file wild  segmentation fault with a large number of files

This bug will be corrected in the next release.

When I first ran into this bug some years ago, I found that the trigger is 
not the number of files but the cumulative number of characters in the 
filespecs.  The way we have gotten around it for now is to break the list 
up into several pieces:
do file wild [A-J]*bhz
   enddo
do file wild [K-N]*bhz
   enddo
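
Inside each loop the current file name is picked up through the do-loop
variable (here "file", referenced as $file$), so one of the pieces might
look something like the following; the rmean/write over body is only
illustrative -- substitute whatever processing you need:

do file wild [A-J]*bhz
*  illustrative per-file commands
   read $file$
   rmean
   write over
enddo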

I think this has been discussed before in sac-help exchanges.

On Tue, 26 Jan 2010, Igor Stubailo wrote:

> This is an update to my original message about the "do file wild *SAC" problem.
> Katie Boyle helped me to figure out what was happening.
> 
> So, the problem is with the SAC memory limit. The script works fine with a
> few files, but when I increase the number of files to 150, SAC fails with a
> segfault. Apparently, "do file wild *SAC" tries to fit all of the *SAC data
> into the buffer and cannot. 
> 
> The data files are 1-sps, 2-hour-long LHZ records. Each file is 29K (29436 bytes).
> 
> My SAC version is the latest and greatest, ver. 101.3b.
> Is there a way to increase the total data size limit that can be used in
> SAC?
> 
> The workaround in my case is to read the SAC files one by one through a
> loop in the shell outside SAC.
> 
> Thanks,
> Igor
> 
>
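
For reference, the one-file-at-a-time approach Igor mentions can be driven
from the shell by feeding commands to SAC on standard input. A rough sketch
in Bourne shell (rather than tcsh), again with a purely illustrative
rmean/write over body:

#!/bin/sh
# Run a separate SAC session for each file so the full file list never
# has to fit in SAC's buffer at once.
for f in *SAC
do
  sac <<EOF
read $f
rmean
write over
quit
EOF
done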

