@prologic (#37xr3ra) sounds about right. I tend to try to build my own before pulling in libs. learn more that way. I was looking at using it as a way to build my twt mirroring idea. and testing the lex parser with a wide-ranging corpus to find edge cases. (the pgp signed feeds for one)
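The (#37xr3ra) subjects threading this conversation are twt hashes. As a parser-testing aid, here is a minimal sketch of the commonly documented Twt Hash scheme, assuming blake2b-256 over `url\ntimestamp\ncontent`, lowercase base32 without padding, keeping the last 7 characters; details such as timestamp normalization vary between implementations, so treat this as an approximation:

```python
import base64
import hashlib

def twt_hash(feed_url: str, timestamp: str, content: str) -> str:
    """Approximate Twt Hash: blake2b-256 of "url\ntimestamp\ncontent",
    base32-encoded (lowercase, no padding), last 7 characters."""
    payload = f"{feed_url}\n{timestamp}\n{content}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    encoded = base64.b32encode(digest).decode("ascii").lower().rstrip("=")
    return encoded[-7:]
```

Any 7-character, base32-alphabet result is a plausible subject hash; the exact value depends on how the feed URL and timestamp are canonicalized.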

matched #hm6yihq score:11.2
(#37xr3ra) @prologic yeah it reads a seed file. I'm using mine. it scans for any mention links and then scans them recursively. it reads from http/s or gopher. i don't have much of a db yet.. it just writes the feed to disk and checks modified dates.. but I will add a db that has hashes/mentions/subjects and such.
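A minimal sketch of the crawler described above, assuming the usual `@<nick url>` mention syntax; the function names and on-disk layout here are illustrative, and gopher support, modified-date checks, and the planned db are omitted:

```python
import os
import re
import urllib.request
from urllib.parse import urlparse

# Matches the @<nick url> mention syntax used in twtxt feeds.
MENTION_RE = re.compile(r"@<(?:\S+ )?(https?://\S+)>")

def crawl(seed_url, out_dir="feeds", seen=None):
    """Fetch a feed, save it to disk, then recurse into mentioned feeds."""
    seen = seen if seen is not None else set()
    if seed_url in seen:
        return seen
    seen.add(seed_url)
    try:
        with urllib.request.urlopen(seed_url, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except OSError:
        return seen  # unreachable feed; skip it
    os.makedirs(out_dir, exist_ok=True)
    name = urlparse(seed_url).netloc.replace(":", "_")
    with open(os.path.join(out_dir, name + ".txt"), "w", encoding="utf-8") as f:
        f.write(body)
    for url in MENTION_RE.findall(body):
        crawl(url, out_dir, seen)
    return seen
```

Starting from a single seed feed, `crawl` discovers the network one mention at a time; the `seen` set keeps mutual mentions from looping forever.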

matched #lmj4dfq score:11.2
@prologic (#37xr3ra) sounds about right. I tend to try to build my own before pulling in libs. learn more that way. I was looking at using it as a way to build my twt mirroring idea. and testing the lex parser with a wide-ranging corpus to find edge cases. (the pgp signed feeds for one)

matched #n7dn5aq score:11.2
(#37xr3ra) @lyse @prologic very curious... i worked on a very similar track. i built a spider that will trace off any `follows = ` comments and mentions from other users and came up with: ``` twters: 744 total: 52073 ```
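The discovery step described here, tracing `follows = ` comments plus mentions, could look like the sketch below. The exact metadata-comment format varies between clients, so both regexes are assumptions rather than a fixed spec:

```python
import re

# Comment lines like "# follows = nick https://..." (or "follow ="); format varies.
FOLLOWS_RE = re.compile(r"^#\s*follows?\s*=\s*(?:\S+\s+)?(https?://\S+)", re.M)
# Inline @<nick url> mentions.
MENTION_RE = re.compile(r"@<(?:\S+ )?(https?://\S+)>")

def discover(feed_text):
    """Collect candidate feed URLs from follows-comments and inline mentions."""
    urls = set(FOLLOWS_RE.findall(feed_text))
    urls.update(MENTION_RE.findall(feed_text))
    return urls
```

Feeding every fetched feed through `discover` and recursing on the new URLs is what produces totals like the `twters: 744` count above.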

matched #n7ufceq score:11.2
@prologic @etux @xuu (#37xr3ra) Now I want to remove the "domain" restriction, add a rate-limit and _try_ to crawl as much of the Twtxt wider network as I can and see how deep it goes 🤔
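A rate limit like the one mentioned can be as simple as a per-host minimum interval between requests; this is a hypothetical sketch, not the actual implementation:

```python
import time

class RateLimiter:
    """Per-host rate limit: at most one request per `interval` seconds."""

    def __init__(self, interval=1.0):
        self.interval = interval
        self.last = {}  # host -> monotonic time of the last request

    def wait(self, host):
        """Block until this host may be hit again, then record the request."""
        now = time.monotonic()
        due = self.last.get(host, 0.0) + self.interval
        if due > now:
            time.sleep(due - now)
        self.last[host] = time.monotonic()
```

Calling `limiter.wait(urlparse(url).netloc)` before each fetch throttles per host while leaving requests to different hosts unblocked.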

matched #oxipdgq score:11.2
This is a twtxt search engine and crawler. Please contact Support if you have any questions, concerns, or feedback!