
Topic awaiting preservation: searching using JavaScript? (Page 1 of 1)

 
hecster2k
Nervous Wreck (II) Inmate

From: sj, ca, usa
Insane since: Feb 2002

posted 03-13-2005 18:08

greetings inmates,

I am trying to create a JavaScript-based search engine for the help pages of my site. I can't figure out how to tell JavaScript to crawl around my site and return the pages whose content matches my search string. I have resorted to creating arrays and populating them by hand with search words, titles, and URLs... is there no better way to do this?

thanks!

"there has to be a solution."

BillyRayPreachersSon
Bipolar (III) Inmate

From: London
Insane since: Jul 2004

posted 03-13-2005 18:36

Do you mean client-side JavaScript, or server-side JavaScript?

If client-side, then in IE-only, you could use the FileSystemObject ActiveX control to search files.

If server-side, you could use the same object, but as it is server-side, it would work for any browser.

Hope this helps,
Dan

Tyberius Prime
Paranoid (IV) Mad Scientist with Finglongers

From: Germany
Insane since: Sep 2001

posted 03-13-2005 19:24

anyhow, having the client download your pages before searching through them is going to be so slow, it's not even going to be funny.

What's wrong with a standard server-side php/perl/cgi/htdig approach?

hecster2k
Nervous Wreck (II) Inmate

From: sj, ca, usa
Insane since: Feb 2002

posted 03-14-2005 06:15

I could always use another language, but this project needed to be very simple, easily manageable, and usable without installing any applications or web servers to get it going. That, and I don't have the technical know-how to do it in Perl/CGI.

So I figured I could just do it the easier way and build a client-side search engine using JavaScript. I never thought about using server-side JavaScript; that's a good approach as well, and maybe I'll look into it more.

Thanks for the input guys. keep it coming!

"there has to be a solution."

poi
Paranoid (IV) Inmate

From: France
Insane since: Jun 2002

posted 03-14-2005 14:42

I definitely think that server-side is the way to go. You could easily list all the HTM files, open them, strip the tags, and see if the keywords match the content. Obviously you could implement a kind of cache to read and strip_tag the pages only if they have been updated since the last search.

kudos
Nervous Wreck (II) Inmate

From:
Insane since: Dec 2004

posted 03-14-2005 16:36

I recall that I made a script (a long time ago) that scanned through the .htm files and built a tree, which was saved as a PHP data structure (for easy access).
Anyway, what is wrong with a JavaScript client-side search? One could take the same approach that I did and
build the tree as a JavaScript data structure, apart from the fact that the file size would get pretty big.

-kudos
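The pre-built structure described above might look something like this: an inverted index generated offline by scanning the .htm files, then shipped to the client as a plain JavaScript file (the keywords and filenames here are made up):

```javascript
// Generated inverted index: keyword -> pages containing it.
// An offline script would scan the .htm files and write this
// object out as, say, searchindex.js.
var searchIndex = {
  "install":  ["setup.htm", "faq.htm"],
  "password": ["accounts.htm"],
  "upgrade":  ["setup.htm"]
};

// Look up a single keyword in the index.
function lookup(word) {
  return searchIndex[word.toLowerCase()] || [];
}
```

As kudos notes, the catch is the size of `searchIndex` once the site grows: the client must download the whole index before it can search at all.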

Hebedee
Paranoid (IV) Inmate

From: Maryland, USA
Insane since: Jan 2001

posted 03-15-2005 03:02

You could link them to a Google search restricted to your site.

Iron Wallaby
Paranoid (IV) Inmate

From: USA
Insane since: May 2004

posted 03-15-2005 21:18

Client side is A) SLOW and B) redundant. Each user has to download all the data of your site, each time they search, on top of the inherent slowness of JS (a LOT of processor cycles), as opposed to having the server do the work once and hand the results to the users.

It's just much more efficient, in terms of CPU and bandwidth, to do it on the server. Not to mention, much much easier, since you have to write much less code (since with JS you need to write code to handle an artificial file tree, and so on... whereas with the server it's already there for you... you could go and call grep and be done with it).

---
"Consider a simple room with only four walls, a ceiling and a floor. Can you see it in your mind's eye? You better not be able to; I haven't specified a light source yet." - Paul Nettle

poi
Paranoid (IV) Inmate

From: France
Insane since: Jun 2002

posted 03-15-2005 22:02

It's not really that JS is slow: if it really were, we couldn't do a lot of the stuff we do for the 20 Lines JavaScript Contests. But it is redundant. Why would you send all the pages of your site to the client when they are stored on your server, and a server-side script can search them fairly easily and quickly?

Doing the search on client side may work for a tiny site with 10 pages maximum but it's a no-no for anything bigger.

kudos
Nervous Wreck (II) Inmate

From:
Insane since: Dec 2004

posted 03-16-2005 13:13

hmmm... It might be slow, but I'm pretty sure it will work for more than 10 pages. To prove it I guess I'd have to implement it, but my supervisor has instructed me not to do ANY recreational programming before my master's thesis is finished.

-kudos

poi
Paranoid (IV) Inmate

From: France
Insane since: Jun 2002

posted 03-16-2005 14:29

Indeed, the execution time to search for some keywords through, say, 50 pages in JavaScript might be reasonable. But the time to send those 50 pages to the client is anything but reasonable.

hecster2k
Nervous Wreck (II) Inmate

From: sj, ca, usa
Insane since: Feb 2002

posted 03-17-2005 00:56

The client-side solution I am proposing basically just has the client build up an array of filenames and keywords. The script then only needs to compare against the array of keywords and return the filenames associated with the matches, so there is no need to crawl through the pages themselves. So far it's pretty fast, and I don't believe this engine will slow down dramatically when searching through, say, an array of 100 elements.
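That filename/keyword array could be sketched like this (the entries are illustrative, and in this scheme they are maintained by hand, which is the weakness bitdamaged points out below):

```javascript
// Each entry pairs a help page with its hand-picked keywords.
var pages = [
  { url: "printing.html", keywords: ["print", "printer", "paper"] },
  { url: "login.html",    keywords: ["login", "password", "account"] }
];

// Return the URLs of pages whose keyword list contains the query.
function search(query) {
  var q = query.toLowerCase();
  var hits = [];
  for (var i = 0; i < pages.length; i++) {
    if (pages[i].keywords.indexOf(q) !== -1) {
      hits.push(pages[i].url);
    }
  }
  return hits;
}
```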

Of course, implementing a database and storing all the page data in there would make life so much easier: you would just use its query functions, and in the end you would have a database table and one page that queries the db, rather than tons of HTML pages. Oh well, I will let them run with this help system for now, and if they find it inadequate they can pay me to redesign it.

"there has to be a solution."

Iron Wallaby
Paranoid (IV) Inmate

From: USA
Insane since: May 2004

posted 03-17-2005 07:44
quote:
poi said:

It's not really that JS is slow


It is compared to dedicated database software, or to UNIX's vanilla C.

---
"Consider a simple room with only four walls, a ceiling and a floor. Can you see it in your mind's eye? You better not be able to; I haven't specified a light source yet." - Paul Nettle

bitdamaged
Maniac (V) Mad Scientist

From: 100101010011 <-- right about here
Insane since: Mar 2000

posted 03-18-2005 19:31
quote:
The client side solution i am proposing basically just has the client build up an array of filenames and keywords. Then the user just needs to compare against the array of keywords, then return the filename associated with it. Then there is no need to crawl through the pages themselves.



... The point is: how are you getting the keywords and filenames without crawling the pages?



.:[ Never resist a perfect moment ]:.
