The Web Design Group

... Making the Web accessible to all.

> My HTML update ideas, I will post every few points
bhblue-2
post Jun 22 2024, 10:54 PM
Post #1





Group: Members
Posts: 2
Joined: 22-June 24
Member No.: 29,196



Hello,

Recently I tried to re-think the whole HTML language. Today I'm sharing my first thoughts with you. What do you think about them?

QUOTE("MY HTML UPDATE IDEAS BY BHBLUEBERRY")
Short description
Here I will propose some of my ideas on how to update the HTML5 language, making it easier and even more universal.
README
I will start from the standardized basics, trying to make the HTML language more intuitive, simple and functional by proposing new rules of composition.

This is the basic, standardized HTML example from the official specification website (https://html.spec.whatwg.org/multipage/). It is what makes a file count as an HTML document (though not the simplest possible one - if I'm not wrong, we can start writing from the body part and it will still be rendered).

CODE
<!DOCTYPE html>
<html lang="en">
<head>
  <title>Sample page</title>
</head>
<body>
  <h1>Sample page</h1>
  <p>This is a <a href="demo.html">simple</a> sample.</p>
  <!-- this is a comment -->
</body>
</html>


My basic idea is to place the new-version constructs in designated comment sections of this syntax. I will post my new ideas as soon as I have written them down correctly and understandably.

WARNING -- IMPORTANT
I am basing my theories on publicly available online HTML-code-to-page renderers, and I cannot be sure that every renderer is written in exactly the same way. That could stop my ideas from working, because I am relying on some language rules that can be misinterpreted without anyone noticing (sometimes we even come to accept misinterpretations as the correct rules, especially when a renderer written that way is in common use).

Link Source Library:

Standard HTML rules in PDF https://html.spec.whatwg.org/print.pdf
Standard HTML rules website https://html.spec.whatwg.org/multipage/
HTML standard creator website https://www.w3.org/
First steps
So we need to somehow make this update implementable on top of the existing structure. It is necessary for our update to be backward-compatible with older versions of the language.
My idea is to use the comment syntax for including the new constructs. We shouldn't allow just any part of an HTML document to be used for this, but rather divide the document into a few parts. An HTML document looks more or less like this:

CODE
<!DOCTYPE html>
<html lang="en">
<head>
  <title>Sample page</title>
</head>
<body>
  <h1>Sample page</h1>
  <p>This is a <a href="demo.html">simple</a> sample.</p>
  <!-- this is a comment -->
</body>
</html>


There are a few possible methods to achieve our goal. In any case, we first have to use the comment syntax to tell the browser that it should render our page with an updated language version, and which one. Although our document starts with the
CODE
<!DOCTYPE html>
declaration, the page can still be rendered without it; many browsers will display a site even when it starts straight from the body section. We should be professional and elegant when making new language rules, but above all we have to design them so that the maximum percentage of sites remains compatible and runnable.

Considering all this, we should put our marker in the first line of the code, and search for it within at most the first 3-5 lines. This accounts for the fact that the HTML code may be embedded inside some other code, and that embedding may push our first line to the second, and so on. It is also possible that the browser will pre-process the page and add some code of its own (sometimes a browser can alter even a non-iframed page from what it was) - we should take this into consideration too, because backward-compatible also means compatible with older and other browsers. Although there are too many browser versions for a single organisation to track, there are not that many browser vendors, so we would only need to send them a request describing such methods to make sure they read and write everything correctly. It may not look that important, but today we often have to handle situations where someone puts older software onto devices with older parts that struggle to present or process the latest standard content. There are also many well-behaved search robots that should have no problem finding the data they need despite our changes, so developers should also be able to reverse the process and, when using the new code, embed the older version inside it, so that updating the site brings no disadvantages.

First line:
CODE
<!--!?A1!&!?A2!-!?UnifiedCodeSymbol!-!?+.jpg+.avi+.etc!-xxxxx!?-->
<!--=-->


Where XXXXX would be the last consciously implemented version of the new HTML.
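To make this concrete, here is a minimal sketch (my own illustration, not part of any standard) of how a renderer could scan the first few lines of a document for the proposed marker. The <!--!? prefix and the 3-5 line window come from the description above; the function name and return format are made up.
CODE
# Minimal sketch: scan the first few lines of an HTML document for the
# proposed version-marker comment. The "<!--!?" prefix and the 3-5 line
# search window come from the post; the function itself is hypothetical.
import re

MARKER_PREFIX = "<!--!?"      # proposed opening of the version marker
SEARCH_WINDOW = 5             # look only at the first 3-5 lines

def find_version_marker(html_text):
    """Return (line_number, marker_body) if a marker is found, else None."""
    for number, line in enumerate(html_text.splitlines()[:SEARCH_WINDOW], start=1):
        stripped = line.strip()
        if stripped.startswith(MARKER_PREFIX):
            body = re.match(r"<!--(.*?)-->", stripped)
            return number, body.group(1) if body else stripped
    return None

# Example: the marker pushed one line down, e.g. by a CMS wrapping the page.
sample = "<!DOCTYPE html>\n<!--!?A1!&!?A2!-!?UnifiedCodeSymbol!-xxxxx!?-->\n<html>"
print(find_version_marker(sample))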

OLD CODE

CODE
<!--!?A1!&0!?
<!--newinstructions-->
<!--=-->


OLD CODE

CODE
<!--!?A2!&1!?
<!--newinstructions-->
<!--=-->


OLD STILL USED PART (&1 AFTER A2) OF CODE

CODE
WHERE:
&0 - read/write only the previous arguments
&1 - read/write the previous arguments and the rest of the code downwards in the older version
&2 - read/write the previous arguments, but look for another <!-- for the rest of the arguments, because there may be a reason to define arguments later. The next &X codes should be reserved for multiple uses of arguments in different places, so the parser knows whether they sit above or below and does not have to hold them until the whole page has loaded. This matters for CMSs, where somebody could add a new definition and it should also be respected. There could be an option to define arguments locally, but that should rather be some letter combination for defining new local (or global) arguments when the ones already implemented are not known (for other writers without admin privileges). Placing the coding method in line 1 is a good idea because of the special symbolic encodings that may be used in different languages, even by mistake, ruining our new syntax in some key places (something like the apple.com case).
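As a rough illustration only, this is how the &X flags listed above could be interpreted by a parser; the flag meanings are the ones from the list, everything else (names, return strings) is hypothetical.
CODE
# Sketch only: describe the proposed &X flag found in a marker such as
# "<!--!?A1!&1!?". The flag meanings are the ones listed above; the function
# name and the returned strings are made up for illustration.
READ_MODES = {
    0: "read/write the previous arguments only",
    1: "read/write the previous arguments and the rest of the code below, in the older version",
    2: "read/write the previous arguments, then look for another <!-- block for the remaining arguments",
}

def describe_flag(marker):
    """Find the digit after '&' in the marker and explain what it asks for."""
    position = marker.find("&")
    if position == -1 or not marker[position + 1 : position + 2].isdigit():
        return "no &X flag found"
    flag = int(marker[position + 1])
    return READ_MODES.get(flag, "reserved for multiple uses of arguments in different places")

print(describe_flag("<!--!?A1!&0!?"))
print(describe_flag("<!--!?A2!&1!?"))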



2. I hope we can imagine what we can do with all of this so far. We could, for example, use new code while leaving long text parts in the older form, without having to enlarge the page file size. When we write the code, we should also think about the possibility that after some time we could write translators that would automatically rewrite the old code into the new one, maybe even without the admin's help.
Furthermore, I think it would be a good idea to merge a few different languages like PHP, JavaScript, Flash, CSS, BBCode and maybe even more into one new language - not to make each of them obsolete, but rather to take the maximum of what they offer, to give developers maximum functionality from the classic one-file-per-page point of view.

3. We should think about one thing: 1 MB of data does not seem important, but when we take this 1 MB and have 1,024,000 people download it, that is about 1 TB more data to store, transfer, read, write - simply to process. I remember that in the earlier years of the Internet, optimisation was one of the most important things everyone had to take into consideration while creating a site. There was a lot of software bundled with computer magazines to compress images and other data, losslessly or not, and even to optimise the code itself. We all appreciated it, because our connections were far less capable, with a percent or even a per-mille of today's bandwidth. Yet there is one thing no one can ignore even now: the request-response time, known as ping. There are a lot of ways to improve it, but it is not possible without taking some weight off the main course - not overloading servers, so they have more power to calculate, redirect, send, receive, process, and even try to predict what they are currently sending and where. This is important because when a server is aware that what it is receiving is ping-priority data, like that from games or banks and financial institutions, it can send it over an optimal route - often not a wider one but a faster one - and it does not have to bundle it (and later unbundle it) with other data because of overload, a practice that is critical for these kinds of traffic. Another argument for data optimisation is, as I wrote earlier, the production of low-end computer-like devices that sometimes have an Internet connection, whose users would be glad if they could browse it at least in a basic way (even now I've seen a review of a 1 GHz, 256 MB RAM matchbox-sized unit). Now I will try to propose some functionalities of the new code I am thinking about.

4. IDEAS
a) A new image/soundless animation format (or an update of an existing one). This format should have one key property: it should be easily re-scalable (size, ratio, quality), so the server could send specific versions to different clients. There should be a basic quality level for every image (about 70-80%) and small half-transparent, fading plus/minus buttons to request better quality, or worse quality if the image wouldn't load or loads very slowly (browsers could even offer such a button for all images on a given site). This could be achieved by making two files - one with all the shapes and one with the colour information. We could send half of the pixels from the first and half of the colour info from the second, and so on. The other half of the information would be predicted by the receiver and would have moderate quality, depending on how much computing power it has for upscaling it back.
For this format to work properly, there should be an option to add special data to every image; this data would determine the best scaling options for different situations. It could be obtained automatically by testing different options, downscaling (etc.) the image and then re-resizing it back to the original state, and finding the differences between the methods. The downscaling and resizing would be performed on the same ratios, back to the same ratios, so the re-resizing step itself should introduce no difference. There could also be an option to do this manually, for example masking different objects and marking them from most to least important (this way, when downscaling, they can be dropped instead of squeezing everything), or keeping their ratio unchanged (the main reason for quality loss) by making them smaller and filling the freed space with a selected or predefined pattern. This way there would be no site-resizing glitches or imperfections on any device. Also, this way we could get a pre-loading feature, with all images first composited at basic quality and then raised to higher quality when the rest of the data is sent (rather than sending the same image again at higher quality). It should be an important task for, say, Google and other search engines with their image-search functionality: one search could mean megabytes less data to transfer, given that people would request the HQ versions only of the images they are interested in at first glance, not to mention much faster whole-page loading (and eventually presenting more results at once). AI systems looking for appropriate images for their tasks could also perform faster and better, especially if they could pick the best candidates just by downloading the shape data. A lot of software today also has automatic colour/brightness functions that work in real time even with video, so maybe there would be an option to send 2/3 of the colour sheets and still obtain the right picture (preferably by sending the 2/3 of each RGB sheet containing the most white and most black pixels). A rough sketch of the two-layer idea follows below.
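Here is a very rough sketch of the two-file idea: a "shape" layer and a "colour" layer, with only half of the shape pixels sent and the receiver predicting the rest from neighbours. Treating luminance as the shape layer and the remainder as colour is my own simplification, and the checkerboard choice of "half the pixels" is just one possible reading.
CODE
# Rough sketch of the two-file idea: split an RGB image into a "shape" layer
# (approximated here by luminance) and a "colour" layer, send only half of the
# shape pixels, and let the receiver predict the rest from its neighbours.
# The layer definitions and the checkerboard sampling are my own assumptions.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(9, 16, 3)).astype(float)   # 16x9 test image

shape_layer = image.mean(axis=2)                     # crude "shape" (luminance)
colour_layer = image - shape_layer[..., None]        # the rest counts as "colour"
# (colour_layer would be subsampled and sent the same way as the shape layer)

# Send a checkerboard half of the shape layer; the rest is missing (NaN).
mask = (np.indices(shape_layer.shape).sum(axis=0) % 2).astype(bool)
sent_shape = np.where(mask, shape_layer, np.nan)

# Receiver side: predict each missing pixel as the mean of its two horizontal
# neighbours (which are always part of the sent half with this pattern).
left = np.roll(sent_shape, 1, axis=1)
right = np.roll(sent_shape, -1, axis=1)
reconstructed = sent_shape.copy()
missing = np.isnan(sent_shape)
reconstructed[missing] = ((left + right) / 2)[missing]

print("mean abs error on predicted shape pixels:",
      round(float(np.abs(reconstructed - shape_layer)[missing].mean()), 2))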

How to send RGB sheets:
Imagine our picture as the pixel matrix (16 BY 9):
CODE
1 X X X X X X X X X X X X X X X X
2 X X X X X X X X X X X X X X X X
3 X X X X X X X X X X X X X X X X
4 X X X X X X X X X X X X X X X X
5 X X X X X X X X X X X X X X X X
6 X X X X X X X X X X X X X X X X
7 X X X X X X X X X X X X X X X X
8 X X X X X X X X X X X X X X X X
9 X X X X X X X X X X X X X X X X
X 1 2 3 4 5 6 7 8 9 A B C D E F 0

We can now define four standard patterns for our frame coding - one for each two-bit code (00, 01, 10, 11).
There is a difference between how to achieve the best quality in images and in videos. Let's focus on images first. Here we have a single frame built from pixels. We should build our sheets using 2/3 of the presented data, so we should choose 2/3 * (16 * 9) = 96 of the 144 pixels above, and we should be able to do it in one of four ways. I think we should start from the basic ideas:
1. Making interlaced frames - choosing pairs of rows from the top: (1,2)(4,5)(7,8) (6 rows times 16 pixels gives exactly 96).
This way it is possible for us to predict the missing lines, mainly by using a gradient of some kind - we will guess the fading and the hue change by approximating from the visible lines. The gradient should continue until we encounter another line, or until a shape from the first sheet divides them. We can make a few different approximations from different directions: up, up->right, up+down, up+down->right, and also NW->SE, etc. We should work out many possibilities and choose the most trustworthy one. A minimal sketch of this row interlacing follows.
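A minimal sketch of method 1, assuming the simplest possible vertical gradient (an average of the kept rows directly above and below); any smarter approximation would replace that one line.
CODE
# Sketch of method 1: keep rows 1,2,4,5,7,8 (0-based: 0,1,3,4,6,7) of a
# 16x9 frame and predict each dropped row as the average of the kept rows
# directly above and below it - the simplest possible vertical gradient.
import numpy as np

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(9, 16)).astype(float)   # 9 rows x 16 columns

kept_rows = [0, 1, 3, 4, 6, 7]       # 6 rows x 16 px = 96 px = 2/3 of 144
dropped_rows = [2, 5, 8]

received = np.full_like(frame, np.nan)
received[kept_rows] = frame[kept_rows]

for row in dropped_rows:
    above = received[row - 1]                            # kept row just above
    below = received[row + 1] if row + 1 < 9 else above  # bottom row: reuse "above"
    received[row] = (above + below) / 2

print("mean abs error on the predicted rows:",
      round(float(np.abs(received[dropped_rows] - frame[dropped_rows]).mean()), 2))
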
2. We can make something like this:
CODE
1 X . X
2 X . X
3 . X . X
4 . X . X


And many more - we should settle on our four choices for every different ratio, so we could later encode them in two bits and the codec would know what to do. A sketch of the two-bit pattern indexing follows.
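To show how the two-bit pattern code could work in practice, here is a sketch that indexes four sampling masks with the codes 00-11. The concrete masks are placeholders of my own; only the "four patterns, two bits" idea comes from the points above.
CODE
# Sketch: index four sampling masks with a two-bit code (00, 01, 10, 11).
# The concrete masks are placeholders; only the "four patterns, two bits"
# idea comes from the post.
import numpy as np

H, W = 9, 16

def rows_pattern():        # 00: interlaced rows, as in method 1
    mask = np.zeros((H, W), dtype=bool)
    mask[[0, 1, 3, 4, 6, 7]] = True
    return mask

def checkerboard():        # 01: every other pixel
    return (np.indices((H, W)).sum(axis=0) % 2) == 0

def columns_pattern():     # 10: drop every third column
    mask = np.ones((H, W), dtype=bool)
    mask[:, 2::3] = False
    return mask

def full_pattern():        # 11: keep everything (fallback)
    return np.ones((H, W), dtype=bool)

PATTERNS = {0b00: rows_pattern, 0b01: checkerboard,
            0b10: columns_pattern, 0b11: full_pattern}

code = 0b00                # the two bits stored alongside the image data
mask = PATTERNS[code]()
print("pattern", format(code, "02b"), "keeps", int(mask.sum()), "of", H * W, "pixels")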

3. When we have a video file, there is another thing that can help us make it as high-quality as possible: motion.
We should choose how often to save the so-called key frame by default, then split our movie into parts where the scene does not change - the compression-friendly material - and then divide each of these clips into intervals as long as the gap between our key frames. Now we can use the previously described pattern (2), but this time we can also use the previous/next frames to predict the un-sent pixels. We should do the frame approximation using standard morphing rules, plus these simple ones: when the camera moves to the right, we should take (at least) 1/3 of the right frame and (at most) 2/3 of the left frame when morphing, because when we look right we see with one eye on the right and with two eyes on the left, since our left eye is following the right one. The other rule is to create something like a binocular effect when moving forward, because then our two eyes merge in the centre of the field and we get a roughly circular blur only in the left and right parts of the view, shaped like our eyes. The blur grows stronger with speed. A toy version of the weighting rule is sketched below.
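A toy version of the 1/3-2/3 weighting rule, reading "left/right frame" as the previous/next frame in the timeline; the blend is a plain weighted average and everything beyond the stated weights is illustrative.
CODE
# Toy version of the weighting rule: for a rightward camera pan, predict a
# missing frame as 2/3 of the previous frame plus 1/3 of the next frame
# (and the mirror of that for a leftward pan). Plain weighted average only.
import numpy as np

rng = np.random.default_rng(2)
previous_frame = rng.integers(0, 256, size=(9, 16)).astype(float)
next_frame = rng.integers(0, 256, size=(9, 16)).astype(float)

def predict_missing_frame(prev_f, next_f, pan_direction):
    """Blend the neighbouring key frames according to the pan direction."""
    if pan_direction == "right":
        return (2 / 3) * prev_f + (1 / 3) * next_f
    if pan_direction == "left":
        return (1 / 3) * prev_f + (2 / 3) * next_f
    return (prev_f + next_f) / 2          # no pan: plain average

predicted = predict_missing_frame(previous_frame, next_frame, "right")
print(predicted.shape, round(float(predicted.mean()), 1))
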
4. There is also another method of dividing the big picture - we should consider splitting it with 3x2 (or 2x3) markers, which cut our video into 12 different files (making 4x3 or 3x4 parts). We should not worry about exactly where we place the division markers, because we always have to re-puzzle the parts anyway, so we only need to know whether the 4-tile direction runs up-down (1) or left-right (0). We would receive pieces of specific dimensions with matching edges, so there should be no problem fitting them together even without extra data. Our encoder should divide the whole screen so that it ends up with roughly 80% single-colour puzzle pieces (or meets some other goal that makes the files smaller). When we define the pattern from the previous point, there should also be an option to allocate more pixels to the puzzle pieces that morph a lot and fewer to those that morph little. A generic tile-splitting sketch follows.
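A generic sketch of the tile splitting and re-puzzling; the marker positions are arbitrary, and note that 3 vertical plus 2 horizontal markers give a 4x3 grid of tiles.
CODE
# Generic sketch: split a frame at given marker positions into a grid of
# tiles and re-puzzle it. Three vertical and two horizontal markers give a
# 4x3 grid; the marker positions below are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
frame = rng.integers(0, 256, size=(90, 160))

horizontal_markers = [30, 60]        # 2 markers -> 3 rows of tiles
vertical_markers = [40, 80, 120]     # 3 markers -> 4 columns of tiles

row_bands = np.split(frame, horizontal_markers, axis=0)
tiles = [np.split(band, vertical_markers, axis=1) for band in row_bands]

# Re-puzzle the tiles; the result must match the original frame exactly.
reassembled = np.vstack([np.hstack(band) for band in tiles])
print(len(tiles), "x", len(tiles[0]), "tiles; reassembly exact:",
      bool(np.array_equal(reassembled, frame)))
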
5. Finally, we should consider the 1-into-4-pixels upscaling process for producing images twice as high and twice as wide. We can either do it this way,
CODE
X X      X X   X X
X X  ->  X X   X X

         X X   X X
         X X   X X

or the other way (for some kinds of views and optics):
CODE
         x  X' x
X  ->    X' X  X'
         x  X' x

where the X' pixels are weighted 1/2 and the x pixels 1/3, and all three crossings
(NE-SW, N-S, NW-SE) are each used at 1/3 to build these pixels correctly.
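A sketch of both variants: plain 2x2 duplication, and a smoothed version loosely based on the 1/2 and 1/3 weights above (the exact kernel and normalisation are my own choice).
CODE
# Sketch of the "1 into 4" upscaling: the plain method repeats every pixel
# into a 2x2 block; the smoothed variant then blends each pixel with its
# edge neighbours (weight 1/2) and diagonal neighbours (weight 1/3).
# The kernel and the normalisation are my own choice, not a fixed method.
import numpy as np

rng = np.random.default_rng(4)
small = rng.integers(0, 256, size=(9, 16)).astype(float)

# Plain "1-4" method: duplicate every pixel into a 2x2 block (18 x 32 result).
doubled = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

# Smoothed variant: weighted average over the 3x3 neighbourhood.
weights = np.array([[1/3, 1/2, 1/3],
                    [1/2, 1.0, 1/2],
                    [1/3, 1/2, 1/3]])
padded = np.pad(doubled, 1, mode="edge")
smoothed = np.zeros_like(doubled)
for dy in range(3):
    for dx in range(3):
        smoothed += weights[dy, dx] * padded[dy:dy + doubled.shape[0],
                                             dx:dx + doubled.shape[1]]
smoothed /= weights.sum()

print(doubled.shape, smoothed.shape)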


It is important to note that for expert-quality picture enhancing and enlarging it is necessary to obtain data about the final radius of our lens, which is sometimes more than one lens but can always be represented as one. We need three pictures and we need to draw the proper perspective lines on them - our lens will usually round them, and from the roundness of the perspective shot we can approximate everything we need. The "1-4" method is best for maximally flat images, and the "1-KS.5" method is better for round-shaped lenses or 3D-made shots. Once we have our picture and the proper lens data, we should somehow predict the light-ray emission: either from outside, by making a depth mask and then calculating the number of rays coming at us from materials with different absorption, or we can try to simulate outside emission - using our lens to project a copy of our image shot at the proper distance from us (we should make our shot on an evenly distanced screen, making colours by crossing light beams; we do not have to calculate different wavelengths if we don't want to - only 2-3). Very often we can easily map the sun position on our depth-enhanced picture without any problems just by analysing the light gradients on the same materials.

OK, I think I have made everything clear about the peripherals - next time I will focus on the HTML itself.