How can I read 150,000 records from a txt file in a short period of time? I am currently using getline and it takes about 23 minutes.
4 Answers
What type of data are you reading, and are you actually using all of the data that gets pulled, or just particular records from the set?
Line-by-line reading triggers a "too much overhead" thought for me. I've seen order-of-magnitude speedups from reading large blocks (ideally a multiple of the disk cluster size) and then parsing for delimiters in memory.
@Netkos is right to ask about the file and what you're doing with it, though; my first response just comes from disliking line I/O. You could also have disk fragmentation, a busy system, a logic issue, or a flaky drive.
To focus on the code, can you post some of it in Code Playground, along with a couple of sample data lines?
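To show what I mean by block reading: a minimal sketch that pulls the file in 64 KiB chunks and scans for newlines in memory. `readLines`, the chunk size, and the file name are my own choices, not from your code:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Read the file in large fixed-size chunks instead of line by line,
// then scan the in-memory buffer for newline delimiters.
std::vector<std::string> readLines(const char* path) {
    std::vector<std::string> lines;
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return lines;
    std::string pending;                 // carries a partial line across chunks
    char buf[1 << 16];                   // 64 KiB chunk; tune to your system
    size_t n;
    while ((n = std::fread(buf, 1, sizeof buf, f)) > 0) {
        size_t start = 0;
        for (size_t i = 0; i < n; ++i) {
            if (buf[i] == '\n') {
                pending.append(buf + start, i - start);
                lines.push_back(pending);
                pending.clear();
                start = i + 1;
            }
        }
        pending.append(buf + start, n - start);  // tail of chunk, no '\n' yet
    }
    if (!pending.empty()) lines.push_back(pending);  // last line without '\n'
    std::fclose(f);
    return lines;
}
```

Once the lines are in memory you can split each one on tabs without touching the disk again.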
Something like surname, given name, and telephone number, and they are all strings. I am currently using getline(cin, input).
while (getline(myfile, title, '\t')) {   // test the stream, not eof()
    getline(myfile, surname, '\t');
    getline(myfile, givenName, '\t');
    getline(myfile, phoneNumber, '\t');
    getline(myfile, emailAddress, '\t');
    getline(myfile, occupation, '\t');
    getline(myfile, company);            // last field: read to end of line
    count = Hash_1.get_Key1(phoneNumber, dayOfBirth);
}
Hash_1.get_Key1 is a function for generating keys. Note that the original loop tested myfile.eof() (which can process the last record twice) and leaked two char arrays per iteration via new[]/strcpy copies that were never used, so those are gone above; dayOfBirth is still never read from the file, which looks like a bug.
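As a variation on the same idea, here is a hedged sketch that slurps the whole file in one bulk read and then parses the tab-separated fields from memory. The `Contact` struct, `loadContacts`, and the field layout are my own naming based on the fields in your loop, not your actual code:

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Contact {
    std::string title, surname, givenName, phoneNumber,
                emailAddress, occupation, company;
};

// Read the entire file in a single bulk operation, then parse in memory.
std::vector<Contact> loadContacts(const std::string& path) {
    std::ifstream myfile(path, std::ios::binary);
    std::ostringstream ss;
    ss << myfile.rdbuf();                // one bulk read of the whole file
    std::istringstream in(ss.str());

    std::vector<Contact> contacts;
    Contact c;
    while (std::getline(in, c.title, '\t')) {  // stream state ends the loop
        std::getline(in, c.surname, '\t');
        std::getline(in, c.givenName, '\t');
        std::getline(in, c.phoneNumber, '\t');
        std::getline(in, c.emailAddress, '\t');
        std::getline(in, c.occupation, '\t');
        std::getline(in, c.company);           // last field runs to end of line
        contacts.push_back(c);
    }
    return contacts;
}
```

That said, 23 minutes for 150,000 lines is far slower than even naive line I/O should be, so it is worth profiling whether getline or something else in the loop (the hashing, the allocations) is the real cost.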