Logline nanosecond to millisecond conversion

I am loading data into Elasticsearch and later visualizing it in Kibana. However, Kibana does not support nanosecond precision, so I want to preprocess the file in bash. The file contains 900,000 lines; how do I iterate over it to accomplish the following?

**INPUT file1.csv:**

> **2018-10-17 11:54:59.4422378**,OUTLOOK.EXE,12052,11316,Thread Profiling,Thread 11316,SUCCESS,User Time: 0.0000000; Kernel Time: 0.0000000; Context Switches: 3,company\username,0

**EXPECTED OUTPUT file2.csv:**

> **2018-10-17 11:54:59.442**,OUTLOOK.EXE,12052,11316,Thread Profiling,Thread 11316,SUCCESS,User Time: 0.0000000; Kernel Time: 0.0000000; Context Switches: 3,company\username,0

How do I round the fractional seconds to 3 digits? I want to round up based on the 4th digit, e.g. 4425000 = 442, 4426000 = 443.

awk -F, 'BEGIN { OFS=FS=","; }
{
seconds=substr($1, index($1, ".")-2, 10);
ms=substr(seconds, 7);
seconds=substr(seconds, 1, 6);
if (ms > 5000)
seconds += 0.001;
$1=sprintf("%s%6.3f", substr($1, 1, index($1, ".") - 2), seconds);
print
}' < input


This simply brute-forces the seconds-plus-fraction out of the first comma-separated field, checks whether the truncated value should be rounded up, reassembles the timestamp back into `$1`, and prints the modified line. One limitation to be aware of: a round-up is not carried across the seconds boundary, so a fraction such as `59.9996000` would come out as `60.000`.
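
As a quick sanity check, you can pipe the two boundary cases from the question through the same program. This assumes the awk program above (including its `BEGIN` block, minus the file redirections) has been saved as `round.awk`, a filename chosen here purely for illustration, and uses a placeholder second field:

printf '%s\n' \
    '2018-10-17 11:54:59.4425000,OUTLOOK.EXE' \
    '2018-10-17 11:54:59.4426000,OUTLOOK.EXE' | awk -f round.awk
# prints:
# 2018-10-17 11:54:59.442,OUTLOOK.EXE    (4th fractional digit is 5, rounds down)
# 2018-10-17 11:54:59.443,OUTLOOK.EXE    (4th fractional digit is 6, rounds up)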
