Re:
Cloudmaster wrote:
>
> whoops, I did a grep filename >> filename, and the ensuing race condition
> caused a pretty big file to be produced. I think the easiest way to fix it
> would be to use some command that removes duplicate lines, but I don't know
> of such a command right off hand. Any suggestions?
>
> Thanks,
> Danny
>
uniq will do that, but it requires the file to be sorted first. So,
"sort filename | uniq > file.new". But then the lines come out sorted
instead of in their original order, so you'd have to rearrange everything
by hand afterwards. That, or a little perl script that weeds out the
duplicate lines. Let's see...
#!/usr/bin/perl
# Slurp the whole file, then blank out every repeat of an earlier line.
@lines = <>;
for ($i = 0; $i <= $#lines; $i++) {
    next unless defined $lines[$i];   # already marked as a duplicate
    for ($j = $i + 1; $j <= $#lines; $j++) {
        # Mark later copies of this line so only the first one survives.
        if (defined $lines[$j] && $lines[$j] eq $lines[$i]) {
            $lines[$j] = undef;
        }
    }
}
# Print what's left, in the original order.
for ($i = 0; $i <= $#lines; $i++) {
    print $lines[$i] if defined $lines[$i];
}
Save that to, say, reallyuniq.pl. Then run "perl reallyuniq.pl < filename >
filename.new". Be sure NOT to use "filename" twice there, because the shell
truncates the output file before perl ever gets to read it, so the original
would be totally destroyed :). That oughta do it: it keeps the first copy of
each line, drops the later repeats, and leaves the original order alone.
--
To unsubscribe, send email to majordomo@luci.org with
"unsubscribe luci-discuss" in the body.