0 votes
by (130 points)

Hello,

Is GREASE (e.g. the 0x0a0a cipher suite) supported, or is support planned for the future?

thanks.

Applies to: Rebex HTTPS
by (130 points)
I see it's not supported, and the library currently lacks the features required to produce a ClientHello exactly like the one Google Chrome sends.

I even tried making the HTTP request via TlsClientSocket and setting the supported cipher suites, but the ordering is not right either: Chrome's ClientHello lists the TLS 1.3 cipher suites first, followed by the TLS 1.2 ones, whereas Rebex puts the TLS 1.2 suites at the top.

We implemented ClientHello verification on our server side that only allows a Chrome-like TLS fingerprint. Rebex seemed very close to what we need for our .NET client application, but this is not doable yet.

Here is the Chrome TLS Fingerprint:
https://tlsfingerprint.io/id/8466c4390d4bc355

For Rebex to produce the same fingerprint, it would at least need to send the same cipher suites, extensions, supported groups and signature algorithms, in exactly the same order.

https://github.com/refraction-networking/utls
uTLS would be a perfect fit for our client application, but we have no plans to write the client in Go.
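
For illustration, this is the kind of ClientHello "parroting" uTLS offers. A rough Go sketch (the host name is just a placeholder) showing the behaviour we would need from a .NET library:

    package main

    import (
        "fmt"
        "net"

        utls "github.com/refraction-networking/utls"
    )

    func main() {
        // Plain TCP connection; uTLS adds the TLS layer on top of it.
        // "example.com" is a placeholder for the site doing the fingerprint check.
        raw, err := net.Dial("tcp", "example.com:443")
        if err != nil {
            panic(err)
        }
        defer raw.Close()

        // HelloChrome_Auto sends a ClientHello mimicking a recent Chrome build:
        // cipher suite order, extensions, supported groups, signature algorithms
        // and GREASE placement.
        conn := utls.UClient(raw, &utls.Config{ServerName: "example.com"}, utls.HelloChrome_Auto)
        if err := conn.Handshake(); err != nil {
            panic(err)
        }
        defer conn.Close()

        fmt.Printf("negotiated TLS version: 0x%04x\n", conn.ConnectionState().Version)
    }

Something equivalent callable from .NET is exactly what we are after.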

Would you consider putting this on your TODO list?

Thanks.

2 Answers

+1 vote
by (144k points)

GREASE is not a cipher suite or a protocol. It is a mechanism intended to spot potential compatibility issues in TLS implementations. It does so by announcing support for random extensions and cipher suites in TLS messages; these values are not supposed to have any effect on the other side, and in cases where they do, the connection breaks, which indicates a bug in the other side's implementation.

The non-existent 0x0a0a cipher suite is just one of several IDs reserved for GREASE that will never have any meaning or cipher assigned to them, which makes them safe to use as part of the GREASE mechanism. Implementations advertising GREASE values select them at random, so the 0x0a0a suite will only appear in a fraction of the relevant TLS messages.
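
For illustration only (this is not Rebex code), here is a small Go sketch that enumerates the sixteen values RFC 8701 reserves for GREASE and checks whether a given 16-bit ID matches the pattern:

    package main

    import "fmt"

    // RFC 8701 reserves sixteen 16-bit values of the form 0xVaVa (both bytes
    // identical, each with a low nibble of 0xA) for GREASE. The same set is used
    // for cipher suites, extension types and named groups, and none of these
    // values will ever be assigned a real meaning.
    func isGrease(v uint16) bool {
        return v&0x0f0f == 0x0a0a && v>>8 == v&0xff
    }

    func main() {
        for hi := uint16(0x0a); hi <= 0xfa; hi += 0x10 {
            v := hi<<8 | hi
            fmt.Printf("0x%04x grease=%v\n", v, isGrease(v))
        }
        fmt.Println(isGrease(0x1301)) // TLS_AES_128_GCM_SHA256 -> false
    }

Any of these values may show up in a Chrome ClientHello, and a compliant peer simply ignores them.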

We are currently not aware of any compatibility issues between the Rebex TLS implementation and clients/servers that apply GREASE to their TLS communication, and if any such issue arises, we will fix it promptly.

So the purpose of GREASE is simply to uncover bugs in other TLS implementations as soon as possible, even at the cost of increasing the likelihood of failures in the meantime. Is this why you are interested in it, or do you have another motivation?

We don't currently have plans to add an option to apply GREASE to Rebex TLS communication, although we would consider it if it turns out there is demand for such a feature. We could also add it on a custom-development basis.

by (130 points)
We use TLS fingerprinting to detect bots, and many others most probably use it for the same purpose; one example:
https://blogs.akamai.com/sitr/2019/05/bots-tampering-with-tls-to-avoid-detection.html

So this is a must for our sites: our own client application needs to pass that check so that we can keep the detection enabled while our client keeps working.
There is demand, then.

Thanks anyway.
0 votes
by (144k points)

If I understand your requirements correctly, the use case is actually to circumvent your own site's detection of non-browser client applications by making the HTTPS client appear to be a browser, or what uTLS calls "parroting".

Unfortunately, this is not really a scenario we aim for. Rebex HTTPS is not a browser simulator designed to circumvent bot detection. Bot-detection routines and browsers are constantly evolving, and we don't wish to commit to constantly updating and tweaking our implementation to make it more browser-like just for the sake of circumventing bot detection. There are also the inconvenient caveats of parroting, which mean that even a solution that currently works would be at constant risk of sudden failure due to server-side TLS upgrades.

If our customers intend to use Rebex HTTPS to access their own sites, we would assume they would configure their sites in such a way that makes it possible for their own bots to actually access them.

Likewise, we would expect third-party sites that do allow third-party bots to access their API to not apply access control based on bot-detection to the relevant endpoints.

The remaining category is bots accessing third-party sites that actively prevent bots from accessing them. In that case, masquerading a non-browser bot as a browser would violate their Terms of Service, and that is not a scenario we want to be involved in.

We might consider adding some uTLS-like features depending on demand. But "parroting" looks like a potential support nightmare due to its caveats, and we would only add some form of it for a legitimate purpose, on a custom-development basis, with a clearly defined support and maintenance contract for the extra functionality.

...